Commit

Process metadata corrections for 2024.acl-long.191 (closes #4556)
mjpost committed Feb 9, 2025
1 parent 4db0828 · commit c0c1ac1
Showing 1 changed file with 3 additions and 3 deletions.
data/xml/2024.acl.xml: 6 changes (3 additions & 3 deletions)
@@ -2663,11 +2663,11 @@
 </paper>
 <paper id="191">
 <title><fixed-case>L</fixed-case>lama2<fixed-case>V</fixed-case>ec: Unsupervised Adaptation of Large Language Models for Dense Retrieval</title>
-<author><first>Chaofan</first><last>Li</last></author>
 <author><first>Zheng</first><last>Liu</last></author>
+<author><first>Chaofan</first><last>Li</last></author>
 <author><first>Shitao</first><last>Xiao</last></author>
-<author><first>Yingxia</first><last>Shao</last><affiliation>Beijing University of Posts and Telecommunications</affiliation></author>
-<author><first>Defu</first><last>Lian</last><affiliation>University of Science and Technology of China</affiliation></author>
+<author><first>Yingxia</first><last>Shao</last></author>
+<author><first>Defu</first><last>Lian</last></author>
 <pages>3490-3500</pages>
 <abstract>Dense retrieval calls for discriminative embeddings to represent the semantic relationship between query and document. It may benefit from the using of large language models (LLMs), given LLMs’ strong capability on semantic understanding. However, the LLMs are learned by auto-regression, whose working mechanism is completely different from representing whole text as one discriminative embedding. Thus, it is imperative to study how to adapt LLMs properly so that they can be effectively initialized as the backbone encoder for dense retrieval. In this paper, we propose a novel approach, called <b>Llama2Vec</b>, which performs unsupervised adaptation of LLM for its dense retrieval application. Llama2Vec consists of two pretext tasks: EBAE (Embedding-Based Auto-Encoding) and EBAR (Embedding-Based Auto-Regression), where the LLM is prompted to <i>reconstruct the input sentence</i> and <i>predict the next sentence</i> based on its text embeddings. Llama2Vec is simple, lightweight, but highly effective. It is used to adapt LLaMA-2-7B on the Wikipedia corpus. With a moderate steps of adaptation, it substantially improves the model’s fine-tuned performances on a variety of dense retrieval benchmarks. Notably, it results in the new state-of-the-art performances on popular benchmarks, such as passage and document retrieval on MSMARCO, and zero-shot retrieval on BEIR. The model and source code will be made publicly available to facilitate the future research. Our model is available at https://github.com/FlagOpen/FlagEmbedding.</abstract>
 <url hash="e0092648">2024.acl-long.191</url>
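As a sanity check for corrections like this one, here is a minimal sketch of how the edited author list could be verified locally, assuming a checkout of the Anthology repository and using only Python's standard library; the file path and the paper's Anthology ID are taken from the diff above, and everything else is illustrative:

    import xml.etree.ElementTree as ET

    # Parse the volume file touched by this commit.
    tree = ET.parse("data/xml/2024.acl.xml")

    # Locate the paper via its Anthology ID, which is stored in the <url> element.
    for paper in tree.getroot().iter("paper"):
        if paper.findtext("url") == "2024.acl-long.191":
            # Print authors in file order to confirm the corrected ordering.
            for author in paper.findall("author"):
                print(author.findtext("first"), author.findtext("last"))
            break

Run from the repository root, this should print the five authors in the order they appear in the corrected entry.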
