Deprecating vector search APIs and updating README accordingly (rapidsai#2448)

I opted to deprecate just the necessary pieces, such as the `index` classes, rather than deprecating every single function.
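
As a rough illustration of that approach (a sketch, not code from this PR), attaching `[[deprecated]]` to a class template makes the warning fire at every point the type is named, so the free functions built around it do not each need their own attribute. The `index` type below is a hypothetical stand-in, not the actual RAFT headers:

```cpp
namespace ann {

// Deprecating the index class warns at every use site, so the free
// functions that take an index do not each need their own attribute.
template <typename T>
class [[deprecated("Use cuVS instead")]] index {
 public:
  explicit index(int dim) : dim_(dim) {}
  int dim() const { return dim_; }

 private:
  int dim_;
};

// No separate attribute needed: any instantiation naming index<T> already warns.
template <typename T>
void search(const index<T>& idx, const T* queries, int n_queries)
{
  (void)idx;
  (void)queries;
  (void)n_queries;
}

}  // namespace ann

int main()
{
  ann::index<float> idx(128);  // warning: 'index' is deprecated: Use cuVS instead
  float q[128] = {};
  ann::search(idx, q, 1);
  return 0;
}
```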

Authors:
  - Corey J. Nolet (https://github.com/cjnolet)

Approvers:
  - Ben Frederickson (https://github.com/benfred)

URL: rapidsai#2448
cjnolet authored Sep 27, 2024
1 parent 5ee0e79 commit 6c4fdfb
Showing 15 changed files with 239 additions and 312 deletions.
111 changes: 3 additions & 108 deletions README.md
@@ -1,7 +1,7 @@
# <div align="left"><img src="https://rapids.ai/assets/images/rapids_logo.png" width="90px"/>&nbsp;RAFT: Reusable Accelerated Functions and Tools for Vector Search and More</div>

> [!IMPORTANT]
> The vector search and clustering algorithms in RAFT are being migrated to a new library dedicated to vector search called [cuVS](https://github.com/rapidsai/cuvs). We will continue to support the vector search algorithms in RAFT during this move, but will no longer update them after the RAPIDS 24.06 (June) release. We plan to complete the migration by RAPIDS 24.08 (August) release.
> The vector search and clustering algorithms in RAFT are being migrated to a new library dedicated to vector search called [cuVS](https://github.com/rapidsai/cuvs). We will continue to support the vector search algorithms in RAFT during this move, but will no longer update them after the RAPIDS 24.06 (June) release. We plan to complete the migration by RAPIDS 24.10 (October) release and will be removing them altogether in the 24.12 (December) release.
![RAFT tech stack](img/raft-tech-stack-vss.png)

@@ -36,7 +36,7 @@

## What is RAFT?

RAFT contains fundamental widely-used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and form building blocks for more easily writing high performance applications.
RAFT contains fundamental widely-used algorithms and primitives for machine learning and data mining. The algorithms are CUDA-accelerated and form building blocks for more easily writing high performance applications.

By taking a primitives-based approach to algorithm development, RAFT
- accelerates algorithm construction time
@@ -47,12 +47,10 @@ While not exhaustive, the following general categories help summarize the accele
#####
| Category | Accelerated Functions in RAFT |
|-----------------------|-----------------------------------------------------------------------------------------------------------------------------------|
| **Nearest Neighbors** | vector search, neighborhood graph construction, epsilon neighborhoods, pairwise distances |
| **Basic Clustering** | spectral clustering, hierarchical clustering, k-means |
| **Solvers** | combinatorial optimization, iterative solvers |
| **Data Formats** | sparse & dense, conversions, data generation |
| **Dense Operations** | linear algebra, matrix and vector operations, reductions, slicing, norms, factorization, least squares, svd & eigenvalue problems |
| **Sparse Operations** | linear algebra, eigenvalue problems, slicing, norms, reductions, factorization, symmetrization, components & labeling |
| **Solvers** | combinatorial optimization, iterative solvers |
| **Statistics** | sampling, moments and summary statistics, metrics, model evaluation |
| **Tools & Utilities** | common tools and utilities for developing CUDA applications, multi-node multi-gpu infrastructure |

@@ -67,42 +65,6 @@ In addition being a C++ library, RAFT also provides 2 Python libraries:

![RAFT is a C++ header-only template library with optional shared library and lightweight Python wrappers](img/arch.png)

## Use cases

### Vector Similarity Search

RAFT contains state-of-the-art implementations of approximate nearest neighbors search (ANNS) algorithms on the GPU, such as:

* [Brute force](https://docs.rapids.ai/api/raft/nightly/pylibraft_api/neighbors/#brute-force). Performs a brute force nearest neighbors search without an index.
* [IVF-Flat](https://docs.rapids.ai/api/raft/nightly/pylibraft_api/neighbors/#ivf-flat) and [IVF-PQ](https://docs.rapids.ai/api/raft/nightly/pylibraft_api/neighbors/#ivf-pq). Use an inverted file index structure to map contents to their locations. IVF-PQ additionally uses product quantization to reduce the memory usage of vectors. These methods were originally popularized by the [FAISS](https://github.com/facebookresearch/faiss) library.
* [CAGRA](https://docs.rapids.ai/api/raft/nightly/pylibraft_api/neighbors/#cagra) (Cuda Anns GRAph-based). Uses a fast ANNS graph construction and search implementation optimized for the GPU. CAGRA outperforms state-of-the art CPU methods (i.e. HNSW) for large batch queries, single queries, and graph construction time.

Projects that use the RAFT ANNS algorithms for accelerating vector search include: [Milvus](https://milvus.io/), [Redis](https://redis.io/), and [Faiss](https://github.com/facebookresearch/faiss).

Please see the example [Jupyter notebook](https://github.com/rapidsai/raft/blob/HEAD/notebooks/VectorSearch_QuestionRetrieval.ipynb) to get started RAFT for vector search in Python.



### Information Retrieval

RAFT contains a catalog of reusable primitives for composing algorithms that require fast neighborhood computations, such as

1. Computing distances between vectors and computing kernel gramm matrices
2. Performing ball radius queries for constructing epsilon neighborhoods
3. Clustering points to partition a space for smaller and faster searches
4. Constructing neighborhood "connectivities" graphs from dense vectors

### Machine Learning

RAFT's primitives are used in several RAPIDS libraries, including [cuML](https://github.com/rapidsai/cuml), [cuGraph](https://github.com/rapidsai/cugraph), and [cuOpt](https://github.com/rapidsai/cuopt) to build many end-to-end machine learning algorithms that span a large spectrum of different applications, including
- data generation
- model evaluation
- classification and regression
- clustering
- manifold learning
- dimensionality reduction.

RAFT is also used by the popular collaborative filtering library [implicit](https://github.com/benfred/implicit) for recommender systems.

## Is RAFT right for me?

@@ -327,70 +289,3 @@ When citing RAFT generally, please consider referencing this Github project.
year={2022}
}
```
If citing the sparse pairwise distances API, please consider using the following bibtex:
```bibtex
@article{nolet2021semiring,
title={Semiring primitives for sparse neighborhood methods on the gpu},
author={Nolet, Corey J and Gala, Divye and Raff, Edward and Eaton, Joe and Rees, Brad and Zedlewski, John and Oates, Tim},
journal={arXiv preprint arXiv:2104.06357},
year={2021}
}
```

If citing the single-linkage agglomerative clustering APIs, please consider the following bibtex:
```bibtex
@misc{nolet2023cuslink,
title={cuSLINK: Single-linkage Agglomerative Clustering on the GPU},
author={Corey J. Nolet and Divye Gala and Alex Fender and Mahesh Doijade and Joe Eaton and Edward Raff and John Zedlewski and Brad Rees and Tim Oates},
year={2023},
eprint={2306.16354},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```

If citing CAGRA, please consider the following bibtex:
```bibtex
@misc{ootomo2023cagra,
title={CAGRA: Highly Parallel Graph Construction and Approximate Nearest Neighbor Search for GPUs},
author={Hiroyuki Ootomo and Akira Naruse and Corey Nolet and Ray Wang and Tamas Feher and Yong Wang},
year={2024},
series = {ICDE '24}
}
```

If citing the k-selection routines, please consider the following bibtex:

```bibtex
@proceedings{10.1145/3581784,
title = {Parallel Top-K Algorithms on GPU: A Comprehensive Study and New Methods},
author={Jingrong Zhang, Akira Naruse, Xipeng Li, and Yong Wang},
year = {2023},
isbn = {9798400701092},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
location = {Denver, CO, USA},
series = {SC '23}
}
```

If citing the nearest neighbors descent API, please consider the following bibtex:
```bibtex
@inproceedings{10.1145/3459637.3482344,
author = {Wang, Hui and Zhao, Wan-Lei and Zeng, Xiangxiang and Yang, Jianye},
title = {Fast K-NN Graph Construction by GPU Based NN-Descent},
year = {2021},
isbn = {9781450384469},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3459637.3482344},
doi = {10.1145/3459637.3482344},
abstract = {NN-Descent is a classic k-NN graph construction approach. It is still widely employed in machine learning, computer vision, and information retrieval tasks due to its efficiency and genericness. However, the current design only works well on CPU. In this paper, NN-Descent has been redesigned to adapt to the GPU architecture. A new graph update strategy called selective update is proposed. It reduces the data exchange between GPU cores and GPU global memory significantly, which is the processing bottleneck under GPU computation architecture. This redesign leads to full exploitation of the parallelism of the GPU hardware. In the meantime, the genericness, as well as the simplicity of NN-Descent, are well-preserved. Moreover, a procedure that allows to k-NN graph to be merged efficiently on GPU is proposed. It makes the construction of high-quality k-NN graphs for out-of-GPU-memory datasets tractable. Our approach is 100-250\texttimes{} faster than the single-thread NN-Descent and is 2.5-5\texttimes{} faster than the existing GPU-based approaches as we tested on million as well as billion scale datasets.},
booktitle = {Proceedings of the 30th ACM International Conference on Information \& Knowledge Management},
pages = {1929–1938},
numpages = {10},
keywords = {high-dimensional, nn-descent, gpu, k-nearest neighbor graph},
location = {Virtual Event, Queensland, Australia},
series = {CIKM '21}
}
```
63 changes: 33 additions & 30 deletions cpp/include/raft/cluster/kmeans.cuh
@@ -86,13 +86,14 @@ using KeyValueIndexOp = detail::KeyValueIndexOp<IndexT, DataT>;
* @param[out] n_iter Number of iterations run.
*/
template <typename DataT, typename IndexT>
void fit(raft::resources const& handle,
const KMeansParams& params,
raft::device_matrix_view<const DataT, IndexT> X,
std::optional<raft::device_vector_view<const DataT, IndexT>> sample_weight,
raft::device_matrix_view<DataT, IndexT> centroids,
raft::host_scalar_view<DataT> inertia,
raft::host_scalar_view<IndexT> n_iter)
[[deprecated("Use cuVS instead")]] void fit(
raft::resources const& handle,
const KMeansParams& params,
raft::device_matrix_view<const DataT, IndexT> X,
std::optional<raft::device_vector_view<const DataT, IndexT>> sample_weight,
raft::device_matrix_view<DataT, IndexT> centroids,
raft::host_scalar_view<DataT> inertia,
raft::host_scalar_view<IndexT> n_iter)
{
detail::kmeans_fit<DataT, IndexT>(handle, params, X, sample_weight, centroids, inertia, n_iter);
}
@@ -150,14 +151,15 @@ void fit(raft::resources const& handle,
* their closest cluster center.
*/
template <typename DataT, typename IndexT>
void predict(raft::resources const& handle,
const KMeansParams& params,
raft::device_matrix_view<const DataT, IndexT> X,
std::optional<raft::device_vector_view<const DataT, IndexT>> sample_weight,
raft::device_matrix_view<const DataT, IndexT> centroids,
raft::device_vector_view<IndexT, IndexT> labels,
bool normalize_weight,
raft::host_scalar_view<DataT> inertia)
[[deprecated("Use cuVS instead")]] void predict(
raft::resources const& handle,
const KMeansParams& params,
raft::device_matrix_view<const DataT, IndexT> X,
std::optional<raft::device_vector_view<const DataT, IndexT>> sample_weight,
raft::device_matrix_view<const DataT, IndexT> centroids,
raft::device_vector_view<IndexT, IndexT> labels,
bool normalize_weight,
raft::host_scalar_view<DataT> inertia)
{
detail::kmeans_predict<DataT, IndexT>(
handle, params, X, sample_weight, centroids, labels, normalize_weight, inertia);
@@ -213,14 +215,15 @@ void predict(raft::resources const& handle,
* @param[out] n_iter Number of iterations run.
*/
template <typename DataT, typename IndexT>
void fit_predict(raft::resources const& handle,
const KMeansParams& params,
raft::device_matrix_view<const DataT, IndexT> X,
std::optional<raft::device_vector_view<const DataT, IndexT>> sample_weight,
std::optional<raft::device_matrix_view<DataT, IndexT>> centroids,
raft::device_vector_view<IndexT, IndexT> labels,
raft::host_scalar_view<DataT> inertia,
raft::host_scalar_view<IndexT> n_iter)
[[deprecated("Use cuVS instead")]] void fit_predict(
raft::resources const& handle,
const KMeansParams& params,
raft::device_matrix_view<const DataT, IndexT> X,
std::optional<raft::device_vector_view<const DataT, IndexT>> sample_weight,
std::optional<raft::device_matrix_view<DataT, IndexT>> centroids,
raft::device_vector_view<IndexT, IndexT> labels,
raft::host_scalar_view<DataT> inertia,
raft::host_scalar_view<IndexT> n_iter)
{
detail::kmeans_fit_predict<DataT, IndexT>(
handle, params, X, sample_weight, centroids, labels, inertia, n_iter);
@@ -252,13 +255,13 @@ void transform(raft::resources const& handle,
}

template <typename DataT, typename IndexT>
void transform(raft::resources const& handle,
const KMeansParams& params,
const DataT* X,
const DataT* centroids,
IndexT n_samples,
IndexT n_features,
DataT* X_new)
[[deprecated("Use cuVS instead")]] void transform(raft::resources const& handle,
const KMeansParams& params,
const DataT* X,
const DataT* centroids,
IndexT n_samples,
IndexT n_features,
DataT* X_new)
{
detail::kmeans_transform<DataT, IndexT>(
handle, params, X, centroids, n_samples, n_features, X_new);
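
Because these are function templates in a header-only library, the attribute only takes effect when a template is actually instantiated; merely including `kmeans.cuh` stays warning-free. A minimal self-contained sketch of that behaviour (plain C++, mirroring the attribute placement above but not using the RAFT headers):

```cpp
namespace sketch {

// Same pattern as the diff above: the attribute precedes the template's signature.
template <typename DataT>
[[deprecated("Use cuVS instead")]] void fit(const DataT* X, int n_rows, int n_cols)
{
  (void)X;
  (void)n_rows;
  (void)n_cols;
}

}  // namespace sketch

int main()
{
  float X[6] = {0.f, 1.f, 2.f, 3.f, 4.f, 5.f};
  // The warning is emitted here, at the point of instantiation/use:
  //   warning: 'fit<float>' is deprecated: Use cuVS instead
  sketch::fit(X, 3, 2);
  return 0;
}
```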
68 changes: 36 additions & 32 deletions cpp/include/raft/cluster/kmeans_balanced.cuh
@@ -73,11 +73,11 @@ namespace raft::cluster::kmeans_balanced {
* datatype. If DataT == MathT, this must be the identity.
*/
template <typename DataT, typename MathT, typename IndexT, typename MappingOpT = raft::identity_op>
void fit(const raft::resources& handle,
kmeans_balanced_params const& params,
raft::device_matrix_view<const DataT, IndexT> X,
raft::device_matrix_view<MathT, IndexT> centroids,
MappingOpT mapping_op = raft::identity_op())
[[deprecated("Use cuVS instead")]] void fit(const raft::resources& handle,
kmeans_balanced_params const& params,
raft::device_matrix_view<const DataT, IndexT> X,
raft::device_matrix_view<MathT, IndexT> centroids,
MappingOpT mapping_op = raft::identity_op())
{
RAFT_EXPECTS(X.extent(1) == centroids.extent(1),
"Number of features in dataset and centroids are different");
@@ -131,12 +131,13 @@ template <typename DataT,
typename IndexT,
typename LabelT,
typename MappingOpT = raft::identity_op>
void predict(const raft::resources& handle,
kmeans_balanced_params const& params,
raft::device_matrix_view<const DataT, IndexT> X,
raft::device_matrix_view<const MathT, IndexT> centroids,
raft::device_vector_view<LabelT, IndexT> labels,
MappingOpT mapping_op = raft::identity_op())
[[deprecated("Use cuVS instead")]] void predict(
const raft::resources& handle,
kmeans_balanced_params const& params,
raft::device_matrix_view<const DataT, IndexT> X,
raft::device_matrix_view<const MathT, IndexT> centroids,
raft::device_vector_view<LabelT, IndexT> labels,
MappingOpT mapping_op = raft::identity_op())
{
RAFT_EXPECTS(X.extent(0) == labels.extent(0),
"Number of rows in dataset and labels are different");
@@ -196,12 +197,13 @@ template <typename DataT,
typename IndexT,
typename LabelT,
typename MappingOpT = raft::identity_op>
void fit_predict(const raft::resources& handle,
kmeans_balanced_params const& params,
raft::device_matrix_view<const DataT, IndexT> X,
raft::device_matrix_view<MathT, IndexT> centroids,
raft::device_vector_view<LabelT, IndexT> labels,
MappingOpT mapping_op = raft::identity_op())
[[deprecated("Use cuVS instead")]] void fit_predict(
const raft::resources& handle,
kmeans_balanced_params const& params,
raft::device_matrix_view<const DataT, IndexT> X,
raft::device_matrix_view<MathT, IndexT> centroids,
raft::device_vector_view<LabelT, IndexT> labels,
MappingOpT mapping_op = raft::identity_op())
{
auto centroids_const = raft::make_device_matrix_view<const MathT, IndexT>(
centroids.data_handle(), centroids.extent(0), centroids.extent(1));
@@ -255,14 +257,15 @@ template <typename DataT,
typename LabelT,
typename CounterT,
typename MappingOpT>
void build_clusters(const raft::resources& handle,
const kmeans_balanced_params& params,
raft::device_matrix_view<const DataT, IndexT> X,
raft::device_matrix_view<MathT, IndexT> centroids,
raft::device_vector_view<LabelT, IndexT> labels,
raft::device_vector_view<CounterT, IndexT> cluster_sizes,
MappingOpT mapping_op = raft::identity_op(),
std::optional<raft::device_vector_view<const MathT>> X_norm = std::nullopt)
[[deprecated("Use cuVS instead")]] void build_clusters(
const raft::resources& handle,
const kmeans_balanced_params& params,
raft::device_matrix_view<const DataT, IndexT> X,
raft::device_matrix_view<MathT, IndexT> centroids,
raft::device_vector_view<LabelT, IndexT> labels,
raft::device_vector_view<CounterT, IndexT> cluster_sizes,
MappingOpT mapping_op = raft::identity_op(),
std::optional<raft::device_vector_view<const MathT>> X_norm = std::nullopt)
{
RAFT_EXPECTS(X.extent(0) == labels.extent(0),
"Number of rows in dataset and labels are different");
@@ -334,13 +337,14 @@ template <typename DataT,
typename LabelT,
typename CounterT,
typename MappingOpT = raft::identity_op>
void calc_centers_and_sizes(const raft::resources& handle,
raft::device_matrix_view<const DataT, IndexT> X,
raft::device_vector_view<const LabelT, IndexT> labels,
raft::device_matrix_view<MathT, IndexT> centroids,
raft::device_vector_view<CounterT, IndexT> cluster_sizes,
bool reset_counters = true,
MappingOpT mapping_op = raft::identity_op())
[[deprecated("Use cuVS instead")]] void calc_centers_and_sizes(
const raft::resources& handle,
raft::device_matrix_view<const DataT, IndexT> X,
raft::device_vector_view<const LabelT, IndexT> labels,
raft::device_matrix_view<MathT, IndexT> centroids,
raft::device_vector_view<CounterT, IndexT> cluster_sizes,
bool reset_counters = true,
MappingOpT mapping_op = raft::identity_op())
{
RAFT_EXPECTS(X.extent(0) == labels.extent(0),
"Number of rows in dataset and labels are different");
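
For downstream code, the net effect of these header changes is that existing calls keep compiling but now warn until callers migrate to cuVS. A hedged usage sketch based on the `fit` signature shown above — the helper calls (`raft::device_resources`, `raft::make_device_matrix`, `raft::make_const_mdspan`) and the default-constructed `raft::cluster::kmeans_balanced_params` are assumptions about the surrounding RAFT API, not part of this diff:

```cpp
#include <raft/cluster/kmeans_balanced.cuh>
#include <raft/core/device_mdarray.hpp>
#include <raft/core/device_resources.hpp>
#include <raft/core/mdspan.hpp>

int main()
{
  raft::device_resources handle;

  int n_rows = 10000, n_cols = 64, n_clusters = 128;
  // Device allocations; filling X with real data is omitted here.
  // The number of clusters is inferred from centroids.extent(0).
  auto X         = raft::make_device_matrix<float, int>(handle, n_rows, n_cols);
  auto centroids = raft::make_device_matrix<float, int>(handle, n_clusters, n_cols);

  raft::cluster::kmeans_balanced_params params;  // defaults; members not shown in this diff

  // Compiles exactly as before this commit, but now emits
  // "'fit' is deprecated: Use cuVS instead" at build time.
  raft::cluster::kmeans_balanced::fit<float, float, int>(
    handle, params, raft::make_const_mdspan(X.view()), centroids.view());

  handle.sync_stream();
  return 0;
}
```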
(Diffs for the remaining 12 changed files are not shown here.)
