LanceDB support #239

Closed
fsiefken opened this issue Aug 15, 2024 · 3 comments · Fixed by #254
Labels
enhancement New feature or request

Comments

fsiefken commented Aug 15, 2024

While Qdrant has local support and provides a Docker image, LanceDB, another Rust vector database, runs serverless and in-process, similar to SQLite.
Qdrant does not yet support GPU acceleration when building the index; LanceDB does.
https://lancedb.github.io/lancedb/ann_indexes/#creating-an-ivf_pq-index
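Roughly, building such an index from the Rust crate could look something like the sketch below (builder and variant names are taken from the lancedb docs linked above and may differ between crate versions; the "vector" column name is just an example):

```rust
use lancedb::index::vector::IvfPqIndexBuilder;
use lancedb::index::Index;
use lancedb::Table;

// Build an IVF_PQ index on an existing "vector" column.
// Everything is left at defaults here; partition and sub-vector counts
// can be tuned via the IvfPqIndexBuilder.
async fn create_ivf_pq_index(table: &Table) -> lancedb::Result<()> {
    table
        .create_index(&["vector"], Index::IvfPq(IvfPqIndexBuilder::default()))
        .execute()
        .await
}
```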

Query-wise, LanceDB appears to be as fast as or faster than other vector databases, and it has lower memory requirements because it runs serverless.
https://blog.lancedb.com/benchmarking-lancedb-92b01032874a/
https://github.com/prrao87/lancedb-study

These properties can potentially speed up RAG pipelines compared to using Qdrant:
https://blog.lancedb.com/accelerating-deep-learning-workflows-with-lance/

Rust API pointers:
https://docs.rs/lancedb/latest/lancedb/
https://lancedb.github.io/lancedb/reranking/
https://towardsdatascience.com/scale-up-your-rag-a-rust-powered-indexing-pipeline-with-lancedb-and-candle-cc681c6162e8
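Based on those pointers, a rough sketch of connecting and running a nearest-neighbour query from Rust (the traits and builder methods follow the docs.rs examples but may vary between lancedb versions; the table name and limit are illustrative):

```rust
use futures::TryStreamExt;
use lancedb::query::{ExecutableQuery, QueryBase};

async fn search(db_uri: &str, query_vector: Vec<f32>) -> lancedb::Result<()> {
    // Connect to a local, in-process database (a directory on disk).
    let db = lancedb::connect(db_uri).execute().await?;
    let table = db.open_table("documents").execute().await?;

    // Approximate nearest-neighbour search over the "vector" column.
    let mut results = table
        .query()
        .nearest_to(query_vector)?
        .limit(10)
        .execute()
        .await?;

    while let Some(batch) = results.try_next().await? {
        println!("got {} rows", batch.num_rows());
    }
    Ok(())
}
```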

timonv (Member) commented Aug 15, 2024

Absolutely! I was actually investigating it yesterday to set it up.

timonv added the enhancement label Aug 15, 2024
fsiefken (Author) commented Aug 15, 2024

Embedding-wise, LanceDB's built-in embedding functions only support OpenAI, while Qdrant appears to support everything:
https://lancedb.github.io/lancedb/embeddings/embedding_functions/
https://qdrant.tech/documentation/embeddings/

I wonder: could Ollama provide embeddings to LanceDB through an OpenAI-compatible API?
https://ollama.com/blog/embedding-models
ollama/ollama#2416
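If the built-in embedding functions are a blocker, another option is to call Ollama directly and write the vectors to LanceDB ourselves. A minimal sketch against Ollama's documented /api/embeddings endpoint (the model name and localhost URL are just the defaults from the blog post above):

```rust
use serde::Deserialize;
use serde_json::json;

#[derive(Deserialize)]
struct EmbeddingResponse {
    embedding: Vec<f32>,
}

// Request an embedding from a locally running Ollama instance and return the
// raw vector, which can then be stored in LanceDB like any other column.
async fn embed_with_ollama(text: &str) -> Result<Vec<f32>, reqwest::Error> {
    let response: EmbeddingResponse = reqwest::Client::new()
        .post("http://localhost:11434/api/embeddings")
        .json(&json!({ "model": "nomic-embed-text", "prompt": text }))
        .send()
        .await?
        .json()
        .await?;
    Ok(response.embedding)
}
```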

timonv (Member) commented Aug 15, 2024

It looks like you can define a schema and store the vectors yourself. I think we outperform LanceDB's embedding speed if we do it ourselves 👯
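For what it's worth, a hedged sketch of what that could look like with the arrow-array/arrow-schema crates: define a schema with a fixed-size-list vector column and hand the batches to create_table. The dimension, column and table names are made up, and the exact create_table signature may differ per lancedb version:

```rust
use std::sync::Arc;

use arrow_array::types::Float32Type;
use arrow_array::{
    FixedSizeListArray, RecordBatch, RecordBatchIterator, RecordBatchReader, StringArray,
};
use arrow_schema::{DataType, Field, Schema};

// Create a table with a text column and a 384-dimensional embedding column,
// then insert a single dummy row. In practice the vectors would come from our
// own embedding pipeline rather than LanceDB's embedding functions.
async fn store_vectors(db: &lancedb::Connection) -> lancedb::Result<()> {
    const DIM: i32 = 384; // illustrative embedding dimension

    let schema = Arc::new(Schema::new(vec![
        Field::new("text", DataType::Utf8, false),
        Field::new(
            "vector",
            DataType::FixedSizeList(Arc::new(Field::new("item", DataType::Float32, true)), DIM),
            false,
        ),
    ]));

    let vectors = FixedSizeListArray::from_iter_primitive::<Float32Type, _, _>(
        vec![Some(vec![Some(0.0_f32); DIM as usize])],
        DIM,
    );
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(StringArray::from(vec!["hello world"])),
            Arc::new(vectors),
        ],
    )
    .expect("schema and columns should match");

    let batches: Box<dyn RecordBatchReader + Send> =
        Box::new(RecordBatchIterator::new(vec![Ok(batch)], schema));
    db.create_table("documents", batches).execute().await?;

    Ok(())
}
```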
