# embeddings-benchmark/results

Note

Previously, it was possible to submit model results to MTEB by adding them to the metadata of the model card on Hugging Face. This is no longer supported, as we want to ensure that results can be matched to the model implementation. If you want to add your model, please follow the guide on how to do so.

This repository contains the results of the MTEB embedding benchmark, evaluated using the mteb package.
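The results produced by mteb are stored as JSON files. As a minimal sketch of what working with such a file looks like, the snippet below writes a small example result and reads a score back out; the exact field names (`task_name`, `scores`, `main_score`) are assumptions based on typical mteb output and may differ between mteb versions.

```python
import json
from pathlib import Path

# Hypothetical example of an mteb-style results file; real files in this
# repository follow a similar JSON layout, though field names may vary
# between mteb versions.
sample = {
    "task_name": "Banking77Classification",
    "scores": {
        "test": [{"main_score": 0.80, "accuracy": 0.80}],
    },
}
path = Path("example_result.json")
path.write_text(json.dumps(sample))

# Read the main score for the "test" split back out.
data = json.loads(path.read_text())
main_score = data["scores"]["test"][0]["main_score"]
print(data["task_name"], main_score)
```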

Reference

- 🦾 Leaderboard: an up-to-date leaderboard of embedding models
- 📚 mteb: guides and instructions on how to use mteb, including running evaluations and submitting scores
- 🙋 Questions: questions about the results
- 🙋 Issues: issues or bugs you have found