Others

  • [EMNLP 2021] (paper) Diverse Distributions of Self-Supervised Tasks for Meta-Learning in NLP
  • [EMNLP 2021] (paper) CrossFit: A Few-shot Learning Challenge for Cross-task Generalization in NLP

PLMs

  • [NIPS 2020] Language Models are Few-Shot Learners
    • GPT-3; in-context few-shot learning (a minimal sketch follows this list)
  • [ACL 2020] Span-ConveRT: Few-shot Span Extraction for Dialog with Pretrained Conversational Representations
  • [EMNLP 2020] Word Frequency Does Not Predict Grammatical Knowledge in Language Models
  • [EMNLP 2020] Unsupervised Distillation of Syntactic Information from Contextualized Word Representations
  • [IJCAI 2019] Meta-Learning for Low-resource Natural Language Generation in Task-oriented Dialogue Systems
  • [EMNLP 2020] Augmented Natural Language for Generative Sequence Labeling
  • [ICCV 2019] (paper) Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
  • [ACL 2021] Making Pre-trained Language Models Better Few-shot Learners
  • [EMNLP 2021] (paper) Want To Reduce Labeling Cost? GPT-3 Can Help
  • [EMNLP 2021] (paper) Discovering Representation Sprachbund For Multilingual Pre-Training
  • [EMNLP 2021] (paper) Continuous Entailment Patterns for Lexical Inference in Context
  • [EMNLP 2021] Low-Resource Dialogue Summarization with Domain-Agnostic Multi-Source Pretraining
  • [EMNLP 2021] STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
  • [EMNLP 2021] Language Models are Few-Shot Butlers
    • Reinforcement learning
  • [EMNLP 2021] ConvFiT: Conversational Fine-Tuning of Pretrained Language Models
  • [EMNLP 2021] (paper) Single-dataset Experts for Multi-dataset Question Answering
    • Builds strong PLMs from dataset-specific expert models
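
The GPT-3 entry above popularized in-context few-shot learning: a handful of labeled demonstrations are concatenated in front of the test input and the model predicts with no parameter updates. Below is a minimal sketch of that setup, using GPT-2 from Hugging Face as a stand-in for GPT-3; the demonstrations, label words, and sentiment task are hypothetical.

```python
# In-context few-shot prompting sketch (GPT-2 as a stand-in for GPT-3).
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# k labeled demonstrations are concatenated in front of the test input.
demonstrations = [
    ("the film is a delight", "positive"),
    ("a tedious, unfunny mess", "negative"),
]
test_input = "an absorbing and well-acted drama"
labels = ["positive", "negative"]

prompt = "".join(f"Review: {x}\nSentiment: {y}\n\n" for x, y in demonstrations)
prompt += f"Review: {test_input}\nSentiment:"

def label_score(label: str) -> float:
    # Log-probability the frozen LM assigns to the label as a continuation.
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + " " + label, return_tensors="pt").input_ids
    with torch.no_grad():
        log_probs = torch.log_softmax(model(full_ids).logits[0], dim=-1)
    return sum(
        log_probs[pos - 1, full_ids[0, pos]].item()
        for pos in range(prompt_len, full_ids.shape[1])
    )

prediction = max(labels, key=label_score)
print(prediction)  # label whose continuation the model finds most likely
```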

Prompt-Based Methods

  • [EMNLP 2021] (paper) Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning
  • [EMNLP 2021] (paper) Discrete and Soft Prompting for Multilingual Models
  • [NAACL 2021] Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
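
The papers in this section share the cloze-style prompting setup: the input is wrapped in a template containing a mask slot, and a verbalizer maps each class to a label word scored at that slot. Here is a minimal sketch with a masked LM from Hugging Face; the template and verbalizer are illustrative assumptions, not taken from any specific paper above.

```python
# Cloze-style prompt classification sketch with a masked language model.
from transformers import AutoTokenizer, AutoModelForMaskedLM
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

# Verbalizer: one label word per class, scored at the mask position.
verbalizer = {"positive": " great", "negative": " terrible"}

def classify(sentence: str) -> str:
    # Template: "<sentence> It was <mask>."
    text = f"{sentence} It was {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        mask_logits = model(**inputs).logits[0, mask_pos].squeeze(0)  # (vocab,)
    scores = {
        label: mask_logits[tokenizer.convert_tokens_to_ids(tokenizer.tokenize(word)[0])].item()
        for label, word in verbalizer.items()
    }
    return max(scores, key=scores.get)

print(classify("The plot is clever and the acting is superb."))
```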

Training Procedure

  • [ACL 2021] Bi-Granularity Contrastive Learning for Post-Training in Few-Shot Scene
  • [ACL 2021] Reordering Examples Helps during Priming-based Few-Shot Learning
  • [EMNLP 2021] Learning from Uneven Training Data: Unlabeled, Single Label, and Multiple Labels
  • [EMNLP 2021] (paper) Revisiting Self-Training for Few-Shot Learning of Language Model
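
The last entry revisits classic self-training: train on the few gold-labeled examples, pseudo-label unlabeled data above a confidence threshold, and retrain on the union. A minimal sketch of that loop follows; a TF-IDF + logistic-regression classifier stands in for the language model, and the toy data and 0.8 threshold are hypothetical.

```python
# Self-training (pseudo-labeling) loop sketch.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["great movie", "awful plot", "loved it", "boring and slow"]
labels = np.array([1, 0, 1, 0])
unlabeled_texts = ["an excellent film", "a dull, lifeless story", "really enjoyable"]

vectorizer = TfidfVectorizer().fit(labeled_texts + unlabeled_texts)
X_l, X_u = vectorizer.transform(labeled_texts), vectorizer.transform(unlabeled_texts)

clf = LogisticRegression().fit(X_l, labels)

for _ in range(3):  # a few self-training rounds
    probs = clf.predict_proba(X_u)
    confident = probs.max(axis=1) >= 0.8          # confidence threshold
    if not confident.any():
        break
    pseudo = probs.argmax(axis=1)
    # Retrain on gold labels plus confident pseudo-labels.
    X_aug = np.vstack([X_l.toarray(), X_u[confident].toarray()])
    y_aug = np.concatenate([labels, pseudo[confident]])
    clf = LogisticRegression().fit(X_aug, y_aug)

print(clf.predict(X_u))
```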

Incremental Learning

  • [EMNLP 2021] (paper) Learn Continually, Generalize Rapidly: Lifelong Knowledge Accumulation for Few-shot Learning

Tasks

query rewriting

relation extraction and classification

  • [ICML 2020] (paper) Few-shot Relation Extraction via Bayesian Meta-learning on Task Graph
  • [AAAI 2019] Hybrid Attention-based Prototypical Networks for Noisy Few-Shot Relation Classification
    • Relation Classification with FewRel
  • [AAAI 2020] Neural Snowball for Few-Shot Relation Learning
  • [AAAI 2020] Few-Shot Knowledge Graph Completion (relation extraction)
  • [AAAI 2018] Few Shot Transfer Learning Between Word Relatedness and Similarity Tasks Using A Gated Recurrent Siamese Network
  • [CIKM 2020] Enhance Prototypical Networks with Text Descriptions for Few-shot Relation Classification
  • [CIKM 2020] MICK: A Meta-Learning Framework for Few-shot Relation Classification with Small Training Data
  • [NIPS 2020] (paper) Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction
  • [WWW 2021] (paper) Zero-shot Learning for Relation Extraction
  • [ICLR 2021] (paper) Prototypical Representation Learning for Relation Extraction
  • [AAAI 2021] (paper) FL-MSRE: A Few-Shot Learning Based Approach to Multimodal Social Relation Extraction
  • [EMNLP 2021] (paper) MapRE: An Effective Semantic Mapping Approach for Low-resource Relation Extraction
  • [EMNLP 2021] (paper) Label Verbalization and Entailment for Effective Zero- and Few-Shot Relation Extraction
  • [KDD 2021] (paper) Knowledge-Enhanced Domain Adaptation in Few-Shot Relation Classification
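
Several entries above (e.g. the hybrid attention, text-description-enhanced, and prototypical-representation papers) build on prototypical networks. Below is a minimal sketch of one N-way K-shot episode: class prototypes are mean support embeddings and queries are scored by negative squared Euclidean distance; random vectors stand in for encoder output.

```python
# Prototypical-network episode sketch (encoder replaced by random embeddings).
import torch
import torch.nn.functional as F

def prototypical_episode(support_emb, support_labels, query_emb, n_way):
    """support_emb: (N*K, d), support_labels: (N*K,) in [0, n_way),
    query_emb: (Q, d). Returns (Q, n_way) logits."""
    prototypes = torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(n_way)
    ])                                                # (n_way, d)
    dists = torch.cdist(query_emb, prototypes) ** 2   # (Q, n_way)
    return -dists                                     # higher = closer

# Toy usage: 5-way 3-shot episode with 10 query sentences.
n_way, k_shot, dim = 5, 3, 32
support = torch.randn(n_way * k_shot, dim)
labels = torch.arange(n_way).repeat_interleave(k_shot)
queries = torch.randn(10, dim)

logits = prototypical_episode(support, labels, queries, n_way)
pred = logits.argmax(dim=-1)
loss = F.cross_entropy(logits, torch.randint(0, n_way, (10,)))  # episodic loss
print(pred, loss.item())
```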

NER

  • [ACL 2021] Entity Concept-enhanced Few-shot Relation Extraction
  • [ACL 2021] Few-NERD: A Few-shot Named Entity Recognition Dataset
  • [ACL 2021] Learning from Miscellaneous Other-Class Words for Few-shot Named Entity Recognition

event detection

  • [ACL 2021] Few-Shot Event Detection with Prototypical Amortized Conditional Random Field
  • [ACL 2021] Adaptive Knowledge-Enhanced Bayesian Meta-Learning for Few-shot Event Detection
  • [WSDM 2020] Meta-learning with dynamic-memory-based prototypical network for few-shot event detection
  • [SIGIR 2021] Graph Learning Regularization and Transfer Learning for Few-Shot Event Detection
  • Possibly related detection tasks:
    • [ACL 2020] Hypernymy Detection for Low-Resource Languages via Meta Learning
    • [ACL 2021] Multi-Label Few-Shot Learning for Aspect Category Detection
    • [ACL 2021] Few-Shot Upsampling for Protest Size Detection
    • [ACL 2021] Enhancing Zero-shot and Few-shot Stance Detection with Commonsense Knowledge Graph

question answering

  • [ACMMM 2018] Fast Parameter Adaptation for Few-shot Image Captioning and Visual Question Answering
  • [ACL 2021] Few-Shot Question Answering by Pretraining Span Selection
  • [EMNLP 2021] (paper) FewshotQA: A simple framework for few-shot learning of question answering tasks using pre-trained text-to-text models
  • [EMNLP 2021] (paper) Contrastive Domain Adaptation for Question Answering using Limited Text Corpora

sentiment analysis

  • [ACL 2021] UserAdapter: Few-Shot User Learning in Sentiment Analysis

other applications

  • [EMNLP 2020] Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models
  • [NIPS 2018] Neural Voice Cloning with a Few Samples
  • [ACMMM 2018] Few-Shot Adaptation for Multimedia Semantic Indexing
  • [AAAI 2019] Few-Shot Image and Sentence Matching via Gated Visual-Semantic Embedding
    • Image and Sentence Matching
  • [ICLR 2020] Few-Shot Learning on Graphs via Superclasses Based on Graph Spectral Measures
  • [EMNLP 2019] Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs
  • [EMNLP 2019] Adapting Meta Knowledge Graph Information for Multi-Hop Reasoning over Few-Shot Relations
  • [EMNLP 2019] FewRel 2.0: Towards More Challenging Few-Shot Relation Classification
  • [INTERSPEECH 2020] An Investigation of Few-Shot Learning in Spoken Term Classification
  • [EMNLP 2020] (paper) Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine Translation
  • [ICASSP 2021] (paper) Investigating on Incorporating Pretrained and Learnable Speaker Representations for Multi-Speaker Multi-Style Text-to-Speech
  • [EACL 2021] (paper) Self-Training Pre-Trained Language Models for Zero- and Few-Shot Multi-Dialectal Arabic Sequence Labeling
  • [EACL 2021] (paper) El Volumen Louder Por Favor: Code-switching in Task-oriented Semantic Parsing
  • [EACL 2021] (paper) Few-Shot Semantic Parsing for New Predicates
  • [EACL 2021] (paper) FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary
  • [SIGIR 2021] Few-shot Variational Reasoning for Medical Dialogue Generation
  • [SIGIR 2021] Relational Learning with Gated and Attentive Neighbor Aggregator for Few-Shot Knowledge Graph Completion
  • [EACL 2021] Few-shot learning through contextual data augmentation
  • [EACL 2021] Exploring the Limits of Few-Shot Link Prediction in Knowledge Graphs
  • [ACL 2020] Shaping Visual Representations with Language for Few-shot Classification
    • jointly predicting natural language task descriptions at training time
    • How can we let language guide representation learning in machine learning models?
  • [ACL 2021] TextSETTR: Few-Shot Text Style Extraction and Tunable Targeted Restyling
  • [ACL 2021] A Closer Look at Few-Shot Crosslingual Transfer: The Choice of Shots Matters
  • [EMNLP 2021] AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models
  • [EMNLP 2021] Semi-Supervised Exaggeration Detection of Health Science Press Releases
  • [EMNLP 2021] Learning Opinion Summarizers by Selecting Informative Reviews
  • [NAACL 2021] Non-Parametric Few-Shot Learning for Word Sense Disambiguation
  • [NAACL 2021] On Unifying Misinformation Detection
  • [NAACL 2021] It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners
  • [NAACL 2021] Improving Zero and Few-Shot Abstractive Summarization with Intermediate Fine-tuning and Data Augmentation
  • [NAACL 2021] Towards Few-shot Fact-Checking via Perplexity
  • [NAACL 2021] DReCa: A General Task Augmentation Strategy for Few-Shot Natural Language Inference
  • [EMNLP 2021] Few-Shot Emotion Recognition in Conversation with Sequential Prototypical Networks
  • [EMNLP 2021] "It doesn't look good for a date": Transforming Critiques into Preferences for Conversational Recommendation Systems
  • [EMNLP 2021] What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
  • [EMNLP 2021] Examining Cross-lingual Contextual Embeddings with Orthogonal Structural Probes
  • [EMNLP 2021] COVR: A test-bed for Visually Grounded Compositional Generalization with real images
    • VQA