Materials for researchers and technologists working in Natural Language Processing, Deep Learning, and related areas.
- Stanford CS224d: Deep Learning for Natural Language Processing
- Stanford CS231n: Convolutional Neural Networks for Visual Recognition
- UC Berkeley CS294: Deep Reinforcement Learning (video)
- Oxford: Deep Learning for Natural Language Processing (video)
- deeplearning.ai on Coursera: Deep Learning Specialization
- Stanford CS20: TensorFlow for Deep Learning Research
- CCF-A -- ACL, NIPS, ICML, IJCAI, AAAI
- CCF-B -- EMNLP, COLING, ECAI
- CCF-C -- CoNLL, NAACL, ICTAI, ICANN, KSEM, ICONIP, ICPR, IJCNN, PRICAI, AISTATS, ACML
- others -- EACL, SEM, IJCNLP, ICSC, NLDB, LDK, PACLING, LTC, ICON, CICLing
- CCF-A -- AI, TPAMI, JMLR
- CCF-B -- TSLP, Computational Linguistics, TASLP, JSLHR, TAP, DKE, Cybernetics, TFS, TNNLS, JAIR, Machine Learning, Neural Computation, Neural Networks, Pattern Recognition
- CCF-C -- TALIP, Computer Speech and Language, NLE, Applied Intelligence, Artificial Life, Computational Intelligence, EAAI, Expert Systems, ESWA, Fuzzy Sets and Systems, IJCIA, IJIS, IJNS, IJPRAI, NCA, NPL, Neurocomputing, PAA, PRL
- others -- TACL
- Oriol Vinyals et al. "Matching Networks for One Shot Learning." arXiv:1606.04080 (2016).
- Mengye Ren et al. "Meta-Learning for Semi-Supervised Few-Shot Classification." ICLR 2018.
- Chelsea Finn, Pieter Abbeel, & Sergey Levine. "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks." ICML 2017. (code)
- Zhenguo Li, Fengwei Zhou, Fei Chen, & Hang Li. "Meta-SGD: Learning to Learn Quickly for Few-Shot Learning." arXiv:1707.09835 (2017). (code)
- Fengwei Zhou, Bin Wu, & Zhenguo Li. "Deep Meta-Learning: Learning to Learn in the Concept Space." arXiv:1802.03596 (2018).
- Ashish Vaswani et al. "Attention Is All You Need." arXiv:1706.03762 (2017). (code1) (code2)
- Alexander M. Rush, Sumit Chopra, & Jason Weston. "A Neural Attention Model for Abstractive Sentence Summarization." EMNLP 2015.
- Sumit Chopra, Michael Auli, & Alexander M. Rush. "Abstractive Sentence Summarization with Attentive Recurrent Neural Networks." NAACL 2016. CODE
- Konstantin Lopyrev. "Generating News Headlines with Recurrent Neural Networks." arXiv:1512.01712 (2015).
- Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos Santos, Caglar Gulcehre, & Bing Xiang. "Abstractive Text Summarization using Sequence-to-Sequence RNNs and Beyond." arXiv:1602.06023 (2016).
- Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, & Yoshua Bengio. "Pointing the Unknown Words." arXiv:1603.08148 (2016).
- Jiatao Gu, Zhengdong Lu, Hang Li, & Victor O.K. Li. "Incorporating Copying Mechanism in Sequence-to-Sequence Learning." ACL 2016. CODE
- Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, & Manabu Okumura. "Controlling Output Length in Neural Encoder-Decoders." arXiv:1609.09552 (2016).
- Yishu Miao, & Phil Blunsom. "Language as a Latent Variable: Discrete Generative Models for Sentence Compression." EMNLP 2016.
- Wenyuan Zeng, Wenjie Luo, Sanja Fidler, & Raquel Urtasun. "Efficient Summarization with Read-Again and Copy Mechanism." arXiv:1611.03382 (2016).
- Jin-ge Yao, Xiaojun Wan, & Jianguo Xiao. "Recent Advances in Document Summarization." Knowledge and Information Systems 2017.
- Jianpeng Cheng, & Mirella Lapata. "Neural Summarization by Extracting Sentences and Words." arXiv:1603.07252 (2016).
- Ramesh Nallapati, Feifei Zhai, & Bowen Zhou. "SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents." AAAI 2017.
- Abigail See, Peter J. Liu, & Christopher D. Manning. "Get To The Point: Summarization with Pointer-Generator Networks." ACL 2017. CODE
- Romain Paulus, Caiming Xiong, & Richard Socher. "A Deep Reinforced Model for Abstractive Summarization." arXiv:1705.04304 (2017).
- Jiwei Tan, Xiaojun Wan, & Jianguo Xiao. "Abstractive Document Summarization with a Graph-based Attentional Neural Model." ACL 2017.
- Masaru Isonuma et al. "Extractive Summarization Using Multi-Task Learning with Document Classification." EMNLP 2017.
- Angela Fan, David Grangier, & Michael Auli. "Controllable Abstractive Summarization." arXiv:1711.05217 (2017).
- Piji Li et al. "Cascaded Attention based Unsupervised Information Distillation for Compressive Summarization." EMNLP 2017.
- Tobias Falke, & Iryna Gurevych. "Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps." EMNLP 2017.
- Alex Graves, Greg Wayne, & Ivo Danihelka. "Neural Turing Machines." arXiv:1410.5401 (2014).
- Jason Weston, Sumit Chopra, & Antoine Bordes. "Memory Networks." ICLR 2015.
- Sainbayar Sukhbaatar, Jason Weston, & Rob Fergus. "End-To-End Memory Networks." NIPS 2015.
- Ankit Kumar, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, & Richard Socher. "Ask Me Anything: Dynamic Memory Networks for Natural Language Processing." arXiv:1506.07285 (2015).
- DUC-2001, 2002, 2003, 2004, 2005, 2006, 2007
- TAC-2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015
- Gigaword
- LCSTS (Chinese)
- CmapSum
- Opinosis
- CNN/DailyMail
- CNN
- MS MARCO (Microsoft)
- NewsQA
- SQuAD
- PD&CFT (Chinese)
- bAbI (Facebook)
- GraphQuestions
- Story Cloze
- SimpleQuestions
- WikiQA
- Penn Treebank
- WSJ Corpus
- NEGRA German Corpus
- TIGER Corpus
- Alpino Treebank
- BulTreeBank
- Turin University Treebank
- Prague Dependency Treebank
- Kyunghyun Cho
- Andrej Karpathy
- Colah's blog
- Richard Socher
- Ian Goodfellow
- Chiyuan Zhang
- Peter Norvig
- Jason Weston
- Noah A. Smith
- Yoav Goldberg
- Sebastian Ruder
- Oriol Vinyals
- Deep Learning Book
- Information Theory, Inference, and Learning Algorithms
- The Elements of Statistical Learning
- [The Study of Language](http://ocw.up.edu.ps/كلية التربية/EENG 2314.أ.زلفي بدر الدين .مقدمة في علم اللغويات/the study of language.pdf)