An Apache 2.0-licensed PyTorch implementation of attention mechanisms for deep learning researchers.


Intro

attentions provides attention mechanisms commonly used in natural language processing, implemented in PyTorch. These attentions can be used in neural machine translation, speech recognition, image captioning, and other sequence-to-sequence tasks.


Attention allows the model to attend to different parts of the source sentence at each step of output generation. Instead of encoding the input sequence into a single fixed context vector, the model learns to generate a separate context vector for each output time step.
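
As a rough illustration of how such a context vector can be computed, here is a minimal scaled dot-product attention sketch (for clarity only; it is not the module implemented in this repository):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(query, key, value):
    # query: (batch, tgt_len, d_k), key: (batch, src_len, d_k), value: (batch, src_len, d_v)
    d_k = query.size(-1)
    # Similarity between each output step and every source position.
    scores = torch.matmul(query, key.transpose(-2, -1)) / d_k ** 0.5
    # Normalize the scores into attention weights over the source positions.
    attn = F.softmax(scores, dim=-1)
    # One context vector per output time step, as a weighted sum of the values.
    context = torch.matmul(attn, value)
    return context, attn
```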

Implementation list

Name                                           Citation
Additive Attention                             Bahdanau et al., 2015
Dot-Product Attention                          Luong et al., 2015
Location-Aware (Location Sensitive) Attention  Chorowski et al., 2015
Scaled Dot-Product Attention                   Vaswani et al., 2017
Multi-Head Attention                           Vaswani et al., 2017
Relative Multi-Head Self-Attention             Dai et al., 2019
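
As an example of the first entry, additive attention can be sketched roughly as follows (a simplified illustration of the Bahdanau et al., 2015 formulation; names and shapes are chosen for clarity and may differ from the modules in this repository):

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Additive (Bahdanau-style) attention: score(q, k) = w^T tanh(W_q q + W_k k)."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.query_proj = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.key_proj = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.score_proj = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, query, key, value):
        # query: (batch, 1, hidden_dim), key/value: (batch, src_len, hidden_dim)
        # Broadcast the decoder state over every encoder position and score each pair.
        scores = self.score_proj(torch.tanh(self.query_proj(query) + self.key_proj(key)))
        attn = torch.softmax(scores.squeeze(-1), dim=-1)   # (batch, src_len)
        context = torch.bmm(attn.unsqueeze(1), value)      # (batch, 1, hidden_dim)
        return context, attn
```

A decoder would typically call such a module once per output step, with query set to the current decoder hidden state and key/value set to the encoder outputs.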

Troubleshooting and Contributing

If you have any questions, bug reports, or feature requests, please open an issue on GitHub or contact [email protected].

I appreciate any kind of feedback or contribution. Feel free to proceed directly with small issues such as bug fixes and documentation improvements. For major contributions and new features, please discuss them with the collaborators in the corresponding issues first.

Code Style

I follow PEP 8 for code style. The docstring style is especially important, since docstrings are used to generate the documentation.
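
For example, a docstring along these lines is easy for documentation generators to pick up (the exact format shown here, Google style, is an assumption; please match the convention used in the existing modules):

```python
def forward(query, key, value):
    """Compute the attention-weighted context vector.

    Args:
        query (torch.Tensor): decoder state of shape ``(batch, 1, hidden_dim)``.
        key (torch.Tensor): encoder outputs of shape ``(batch, src_len, hidden_dim)``.
        value (torch.Tensor): encoder outputs of shape ``(batch, src_len, hidden_dim)``.

    Returns:
        Tuple[torch.Tensor, torch.Tensor]: the context vector and the attention weights.
    """
```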

Author
