Releases: ruotianluo/self-critical.pytorch
3.2
Official py3 support; MODEL ZOO
- Since it's 2020, Python 3 is officially supported. Open an issue if something is still broken.
- Finally, there is a relatively complete model zoo. Feel free to try the provided models.
v3
- Add structure loss inspired by Classical Structured Prediction Losses for Sequence to Sequence Learning
- Add a function to sample n captions, supporting the methods described in https://www.dropbox.com/s/tdqr9efrjdkeicz/iccv.pdf?dl=0.
- More PyTorch-style dataloader design. The dataloader also no longer repeats image features according to seq_per_img; the repeating is now done in the model's forward function.
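The per-image repetition moved into the forward pass can be sketched with `repeat_interleave` (a generic illustration; `repeat_feats` and the tensor shapes here are assumptions, not the repo's actual code):

```python
import torch

def repeat_feats(att_feats, seq_per_img):
    # Repeat each image's features seq_per_img times along the batch
    # dimension, so one image row lines up with its seq_per_img captions:
    # (batch, ...) -> (batch * seq_per_img, ...)
    return att_feats.repeat_interleave(seq_per_img, dim=0)

# e.g. 2 images with 3 captions each:
feats = torch.arange(2).float().unsqueeze(1)   # shape (2, 1)
out = repeat_feats(feats, 3)                   # shape (6, 1)
```

Doing this inside the model keeps the dataloader simple and avoids moving duplicated feature tensors through the data pipeline.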
- Add multi-sentence sampling evaluation metrics such as mBleu and Self-CIDEr (those described in https://www.dropbox.com/s/tdqr9efrjdkeicz/iccv.pdf?dl=0).
- Use detectron-style configs to set up experiments.
- A better self-critical objective (now named new_self_critical).
Use the config ymls that end with nsc to test its performance. A technical report will be out soon.
It outperforms the original SCST on all metrics by a small margin, and is also slightly faster.
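As a rough sketch of what an improved self-critical objective can look like, here is a leave-one-out variant that baselines each sampled caption against the mean reward of the other samples for the same image. The function name and the leave-one-out choice are assumptions on my part, not necessarily the repo's exact formulation:

```python
import torch

def new_self_critical_loss(sample_logprobs, rewards):
    # sample_logprobs: (batch, n) summed log-probs of n sampled captions per image
    # rewards:         (batch, n) metric score (e.g. CIDEr) of each sample
    # Baseline for each sample: the mean reward of the *other* n-1 samples
    # from the same image (leave-one-out; an assumption, see lead-in).
    n = rewards.size(1)
    baseline = (rewards.sum(dim=1, keepdim=True) - rewards) / (n - 1)
    advantage = rewards - baseline
    # REINFORCE with baseline: push up samples that beat the baseline.
    return -(advantage * sample_logprobs).mean()
```

A per-image baseline computed from the model's own samples needs no extra greedy decoding pass, which is one way such a variant can end up slightly faster than the original SCST.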
Add flickr30k support
2.3 Update ADVANCED.md
Add a few more things
1. Refactor the code a little bit.
2. Add BPE (it didn't seem to make much of a difference).
3. Add nucleus sampling, top-k sampling, and Gumbel-softmax sampling.
4. Make AttEnsemble compatible with the transformer.
5. Add "remove bad ending" from Improving Reinforcement Learning Based Image Captioning with Natural Language Prior.
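Nucleus (top-p) sampling, mentioned above, can be sketched generically as follows (a standard implementation, not the repo's code; the function name is hypothetical):

```python
import torch

def nucleus_sample(logits, p=0.9):
    # Top-p (nucleus) sampling: keep the smallest set of tokens whose
    # cumulative probability exceeds p, renormalize, and sample from it.
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = probs.sort(descending=True)
    cum = sorted_probs.cumsum(dim=-1)
    # Drop tokens once the cumulative mass *before* them already exceeds p
    # (this always keeps at least the single most probable token).
    mask = cum - sorted_probs > p
    sorted_probs[mask] = 0.0
    sorted_probs /= sorted_probs.sum(dim=-1, keepdim=True)
    choice = torch.multinomial(sorted_probs, 1)
    return sorted_idx.gather(-1, choice).squeeze(-1)
```

Compared to top-k, the nucleus cutoff adapts to the shape of the distribution: a peaked distribution keeps few tokens, a flat one keeps many.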
Just add a few things
- Add loss_wrapper for multi-gpu loss computation
- Fix some bugs
- Add transformer.
Add new features.
- Add support for optimizing BLEU-4, or a combination of BLEU-4 and CIDEr.
- Add bottom-up feature support
- Add ensembling during evaluation.
- Add multi-gpu support.
- Add miscellaneous things (box features, experimental models, etc.).
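One common way to ensemble decoders at evaluation time is to average the models' next-token probabilities in probability space. This is a generic sketch; the repo's actual ensemble (e.g. AttEnsemble) may combine models differently:

```python
import math
import torch

def ensemble_logprobs(per_model_logprobs):
    # per_model_logprobs: list of (batch, vocab) log-prob tensors, one per
    # model. Average in probability space, then return to log space:
    # log( (1/M) * sum_m exp(lp_m) )
    stacked = torch.stack(per_model_logprobs, dim=0)  # (M, batch, vocab)
    return torch.logsumexp(stacked, dim=0) - math.log(len(per_model_logprobs))

# Two toy models disagreeing over a 2-token vocabulary:
lp1 = torch.log(torch.tensor([[0.2, 0.8]]))
lp2 = torch.log(torch.tensor([[0.6, 0.4]]))
out = ensemble_logprobs([lp1, lp2])
```

Using `logsumexp` keeps the computation numerically stable even when individual log-probabilities are very negative.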
Self critical model
This version can replicate the results of the self-critical sequence training paper.
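The original SCST objective uses the reward of the greedy-decoded caption as the baseline for each sampled caption. A minimal per-caption sketch (the function name is assumed, and real implementations work token-by-token over padded sequences):

```python
import torch

def scst_loss(sample_logprobs, sample_reward, greedy_reward):
    # sample_logprobs: (batch,) summed log-probs of the sampled captions
    # sample_reward:   (batch,) metric score (e.g. CIDEr) of each sample
    # greedy_reward:   (batch,) score of the greedy-decoded caption
    # Samples that beat greedy decoding get reinforced; worse ones suppressed.
    advantage = sample_reward - greedy_reward
    return -(advantage * sample_logprobs).mean()
```

Because the baseline is the model's own test-time (greedy) output, training directly optimizes the gap between exploration and inference behavior, which is the core idea of self-critical sequence training.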