The target audience is all members of the NLP group and any other interested participants.
The meetings take place weekly during term, usually on Mondays, 13:00-14:30. See below for a list of upcoming and past meetings.
The meetings are informal and require no preparation, except that the moderator should read the current paper in full and everyone else should have at least skimmed it.
There is also a #readinggroup channel in the NLP group's unofficial Slack workspace: https://usfd-nlp.slack.com/messages
A list of past meetings before 2018/19 can be found here.
-
Mon 18 Feb 2019
- Paper: (Dehghani et al., 2019), ArXiv -- [Universal Transformers](https://arxiv.org/abs/1807.03819)
- Moderator: Hardy
- Room: G25
-
Mon 11 Feb 2019
- Paper: (Houlsby et al., 2019), ArXiv -- Parameter-Efficient Transfer Learning for NLP
- Moderator: Fred
- Room: G12-Blue
-
Mon 04 Feb 2019
- Paper: Howard and Ruder (ACL 2018), Universal Language Model Fine-tuning for Text Classification
- Moderator: Nikos
- Room: G25
-
Mon 28 Jan 2019
- Paper: Surya et al. (2019). Unsupervised Neural Text Simplification
- Moderator: Fernando
- Room: G25
-
Wed 12 Dec 2018
- Papers:
- Wu et al. (2018): Word Mover's Embedding: From Word2Vec to Document Embedding, https://arxiv.org/abs/1811.01713
- Wu et al. (2018): D2KE: From Distance to Kernel and Embedding
- Moderator: Johann
- Room: COM-G25
-
Wed 05 Dec 2018
- NAACL submissions review (deadline: Dec 10)
- Room: COM-G25
-
Wed 28 Nov 2018
- Papers: the following three, which are all somewhat related, with the focus on BERT and transformer-based architectures and representations:
- Devlin et al. (2018): BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, https://arxiv.org/abs/1810.04805
- Cer et al. (2018): Universal Sentence Encoder, https://arxiv.org/abs/1803.11175
- Yang et al. (2018): Learning Semantic Textual Similarity from Conversations, https://arxiv.org/abs/1804.07754
- Note: it will help to be familiar with Vaswani et al. (2017): Attention Is All You Need, https://arxiv.org/abs/1706.03762
- Moderator: Johann
- Room: COM-G25
-
Wed 21 Nov 2018
- Paper: Peters et al. (2018), Deep contextualized word representations, NAACL 2018
- Moderator: Carol
- Room: COM-G25
- Related papers mentioned:
- word senses in embeddings: Arora et al. (2016): Linear Algebraic Structure of Word Senses, with Applications to Polysemy. http://arxiv.org/abs/1601.03764
- combining different vector spaces: Coates and Bollegala (2018): Frustratingly Easy Meta-Embedding -- Computing Meta-Embeddings by Averaging Source Word Embeddings. http://aclweb.org/anthology/N18-2031
- BPEs for MT: Sennrich et al. (2016): Neural Machine Translation of Rare Words with Subword Units. http://www.aclweb.org/anthology/P16-1162
-
Wed 14 Nov 2018
- EMNLP recap: discussion of papers presented at the conference
- Moderator: Zeerak
- Room: COM-G22 Blue
-
Wed 07 Nov 2018
- NLP seminar (reading group postponed)
-
Wed 31 Oct 2018
- Paper: Zheng et al. (2018), Multi-Reference Training with Pseudo-References for Neural Translation and Text Generation, In EMNLP 2018
- Moderator: Makis
- Room: COM-G25
-
Wed 24 Oct 2018
- Paper: Xing and Paul (2018), Diagnosing and Improving Topic Models by Analyzing Posterior Variability, In AAAI
- Moderator: Areej
- Room: COM-G25
-
Wed 17 Oct 2018
- Paper: Dong, Quirk and Lapata (2018), Confidence Modeling for Neural Semantic Parsing, In ACL
- Moderator: Nikos
- Room: COM-G25
-
Wed 10 Oct 2018
- Kick-off meeting
- Room: COM-G22 Blue