Merge pull request rapidsai#2508 from bdice/branch-25.02-merge-24.12
Forward-merge branch-24.12 to branch-25.02
raydouglass authored Nov 25, 2024
2 parents 31d6c5d + 54b857e commit cd18e64
Showing 751 changed files with 2,033 additions and 80,282 deletions.
1 change: 0 additions & 1 deletion .github/workflows/build.yaml
@@ -52,7 +52,6 @@ jobs:
branch: ${{ inputs.branch }}
date: ${{ inputs.date }}
sha: ${{ inputs.sha }}
skip_upload_pkgs: libraft-template
docs-build:
if: github.ref_type == 'branch'
needs: python-build
12 changes: 1 addition & 11 deletions .github/workflows/pr.yaml
@@ -43,16 +43,8 @@ jobs:
- '!README.md'
- '!docs/**'
- '!img/**'
- '!notebooks/**'
- '!python/**'
- '!thirdparty/LICENSES/**'
test_notebooks:
- '**'
- '!.devcontainer/**'
- '!.pre-commit-config.yaml'
- '!CONTRIBUTING.md'
- '!README.md'
- '!thirdparty/LICENSES/**'
test_python:
- '**'
- '!.devcontainer/**'
@@ -61,7 +53,6 @@ jobs:
- '!README.md'
- '!docs/**'
- '!img/**'
- '!notebooks/**'
- '!thirdparty/LICENSES/**'
checks:
secrets: inherit
@@ -89,7 +80,6 @@ jobs:
with:
build_type: pull-request
enable_check_symbols: true
symbol_exclusions: raft_cutlass
conda-python-build:
needs: conda-cpp-build
secrets: inherit
@@ -151,5 +141,5 @@ jobs:
cuda: '["12.5"]'
build_command: |
sccache -z;
build-all -DBUILD_PRIMS_BENCH=ON -DBUILD_ANN_BENCH=ON --verbose;
build-all -DBUILD_PRIMS_BENCH=ON --verbose;
sccache -s;
1 change: 0 additions & 1 deletion .github/workflows/test.yaml
@@ -23,7 +23,6 @@ jobs:
date: ${{ inputs.date }}
sha: ${{ inputs.sha }}
enable_check_symbols: true
symbol_exclusions: raft_cutlass
conda-cpp-tests:
secrets: inherit
uses: rapidsai/shared-workflows/.github/workflows/[email protected]
1 change: 0 additions & 1 deletion .gitignore
@@ -25,7 +25,6 @@ log
dask-worker-space/
*.egg-info/
*.bin
bench/ann/data
temporary_*.json

## scikit-build
8 changes: 5 additions & 3 deletions .pre-commit-config.yaml
@@ -62,7 +62,7 @@ repos:
entry: ./cpp/scripts/run-cmake-format.sh cmake-format
language: python
types: [cmake]
exclude: .*/thirdparty/.*|.*FindAVX.cmake.*
exclude: .*/thirdparty/.*
# Note that pre-commit autoupdate does not update the versions
# of dependencies, so we'll have to update this manually.
additional_dependencies:
@@ -93,7 +93,10 @@ repos:
- id: codespell
additional_dependencies: [tomli]
args: ["--toml", "pyproject.toml"]
exclude: (?x)^(^CHANGELOG.md$)
exclude: |
(?x)
^CHANGELOG[.]md$|
^cpp/cmake/patches/cutlass/build-export[.]patch$
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
hooks:
@@ -114,7 +117,6 @@ repos:
cpp/include/raft/neighbors/detail/faiss_select/|
cpp/include/raft/thirdparty/|
docs/source/sphinxext/github_link[.]py|
cpp/cmake/modules/FindAVX[.]cmake
- id: verify-alpha-spec
- repo: https://github.com/rapidsai/dependency-file-generator
rev: v1.16.0
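The codespell `exclude` above now uses a multi-line verbose regex. As a quick, illustrative sanity check (not part of this diff), the inline `(?x)` flag corresponds to `re.VERBOSE` in Python, which ignores the insignificant whitespace in the pattern:

```python
# Illustrative only: verify that the verbose exclude pattern matches the
# intended files and nothing else. re.VERBOSE mirrors the inline (?x) flag.
import re

exclude = re.compile(
    r"""
    ^CHANGELOG[.]md$|
    ^cpp/cmake/patches/cutlass/build-export[.]patch$
    """,
    re.VERBOSE,
)

assert exclude.match("CHANGELOG.md")
assert exclude.match("cpp/cmake/patches/cutlass/build-export.patch")
assert exclude.match("README.md") is None
```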
30 changes: 11 additions & 19 deletions README.md
@@ -1,7 +1,7 @@
# <div align="left"><img src="https://rapids.ai/assets/images/rapids_logo.png" width="90px"/>&nbsp;RAFT: Reusable Accelerated Functions and Tools for Vector Search and More</div>

> [!IMPORTANT]
> The vector search and clustering algorithms in RAFT are being migrated to a new library dedicated to vector search called [cuVS](https://github.com/rapidsai/cuvs). We will continue to support the vector search algorithms in RAFT during this move, but will no longer update them after the RAPIDS 24.06 (June) release. We plan to complete the migration by RAPIDS 24.10 (October) release and will be removing them altogether in the 24.12 (December) release.
> The vector search and clustering algorithms in RAFT have been formally migrated to a new library dedicated to vector search called [cuVS](https://github.com/rapidsai/cuvs). The headers for the vector search and clustering algorithms in RAFT will remain for a brief period, but will no longer be tested, benchmarked, included in the pre-compiled libraft binary, or otherwise updated after the 24.12 (December 2024) release. We will be removing these headers altogether in a future release. It is strongly suggested to use cuVS for these routines (any headers in the `distance`, `neighbors`, `cluster`, and `spatial` directories); use the RAFT versions at your own risk.
![RAFT tech stack](img/raft-tech-stack-vss.png)

@@ -27,7 +27,6 @@
- [RAFT Reference Documentation](https://docs.rapids.ai/api/raft/stable/): API Documentation.
- [RAFT Getting Started](./docs/source/quick_start.md): Getting started with RAFT.
- [Build and Install RAFT](./docs/source/build.md): Instructions for installing and building RAFT.
- [Example Notebooks](./notebooks): Example jupyter notebooks
- [RAPIDS Community](https://rapids.ai/community.html): Get help, contribute, and collaborate.
- [GitHub repository](https://github.com/rapidsai/raft): Download the RAFT source code.
- [Issue tracker](https://github.com/rapidsai/raft/issues): Report issues or request features.
@@ -120,13 +119,13 @@ auto metric = raft::distance::DistanceType::L2SqrtExpanded;
raft::distance::pairwise_distance(handle, input.view(), input.view(), output.view(), metric);
```
It's also possible to create `raft::device_mdspan` views to invoke the same API with raw pointers and shape information:
It's also possible to create `raft::device_mdspan` views to invoke the same API with raw pointers and shape information. Take this example from the [NVIDIA cuVS](https://github.com/rapidsai/cuvs) library:
```c++
#include <raft/core/device_resources.hpp>
#include <raft/core/device_mdspan.hpp>
#include <raft/random/make_blobs.cuh>
#include <raft/distance/distance.cuh>
#include <cuvs/distance/distance.hpp>
raft::device_resources handle;
@@ -147,21 +146,21 @@ auto output_view = raft::make_device_matrix_view(output, n_samples, n_samples);
raft::random::make_blobs(handle, input_view, labels_view);
auto metric = raft::distance::DistanceType::L2SqrtExpanded;
raft::distance::pairwise_distance(handle, input_view, input_view, output_view, metric);
auto metric = cuvs::distance::DistanceType::L2SqrtExpanded;
cuvs::distance::pairwise_distance(handle, input_view, input_view, output_view, metric);
```


### Python Example

The `pylibraft` package contains a Python API for RAFT algorithms and primitives. `pylibraft` integrates nicely into other libraries by being very lightweight with minimal dependencies and accepting any object that supports the `__cuda_array_interface__`, such as [CuPy's ndarray](https://docs.cupy.dev/en/stable/user_guide/interoperability.html#rmm). The number of RAFT algorithms exposed in this package is continuing to grow from release to release.
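As a side note (not from the README), the `__cuda_array_interface__` protocol means other device-array libraries can be passed in directly as well; the sketch below uses a Numba device array purely as an illustration, and assumes `pylibraft`, NumPy, and Numba are installed with a GPU present.

```python
# A minimal interop sketch: any object exposing __cuda_array_interface__
# (here a Numba device array) can be handed to pylibraft without copying
# through the host. The result is pylibraft's own device_ndarray.
import numpy as np
from numba import cuda

from pylibraft.distance import pairwise_distance

host = np.random.random_sample((1000, 32)).astype(np.float32)
dev = cuda.to_device(host)  # exposes __cuda_array_interface__

dist = pairwise_distance(dev, dev, metric="euclidean")
```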

The example below demonstrates computing the pairwise Euclidean distances between CuPy arrays. Note that CuPy is not a required dependency for `pylibraft`.
The example below demonstrates computing the pairwise Euclidean distances between CuPy arrays using the [NVIDIA cuVS](https://github.com/rapidsai/cuvs) library. Note that CuPy is not a required dependency for `pylibraft`.

```python
import cupy as cp

from pylibraft.distance import pairwise_distance
from cuvs.distance import pairwise_distance

n_samples = 5000
n_features = 50
@@ -208,7 +207,7 @@ pylibraft.config.set_output_as(lambda device_ndarray: return device_ndarray.copy
```python
import cupy as cp

from pylibraft.distance import pairwise_distance
from cuvs.distance import pairwise_distance

n_samples = 5000
n_features = 50
@@ -223,18 +222,15 @@ pairwise_distance(in1, in2, out=output, metric="euclidean")
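# For reference (a sketch, not part of this diff): a complete, runnable
# version of the README fragments above. It assumes pylibraft and CuPy are
# installed and a GPU is available; the updated README recommends the
# equivalent cuvs.distance API going forward.
import cupy as cp

from pylibraft.distance import pairwise_distance

n_samples = 5000
n_features = 50

in1 = cp.random.random_sample((n_samples, n_features), dtype=cp.float32)
in2 = cp.random.random_sample((n_samples, n_features), dtype=cp.float32)

# Write into a pre-allocated CuPy array, as the hunk header above shows.
output = cp.empty((n_samples, n_samples), dtype=cp.float32)
pairwise_distance(in1, in2, out=output, metric="euclidean")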

## Installing

RAFT's C++ and Python libraries can both be installed through Conda, and the Python libraries can also be installed through Pip.


### Installing C++ and Python through Conda

The easiest way to install RAFT is through conda; several packages are provided:
- `libraft-headers` C++ headers
- `libraft` (optional) C++ shared library containing pre-compiled template instantiations and runtime API.
- `pylibraft` (optional) Python library
- `raft-dask` (optional) Python library for deployment of multi-node multi-GPU algorithms that use the RAFT `raft::comms` abstraction layer in Dask clusters.
- `raft-ann-bench` (optional) Benchmarking tool for easily producing benchmarks that compare RAFT's vector search algorithms against other state-of-the-art implementations.
- `raft-ann-bench-cpu` (optional) Reproducible benchmarking tool similar to above, but doesn't require CUDA to be installed on the machine. Can be used to test in environments with competitive CPUs.

Use the following command, depending on your CUDA version, to install all of the RAFT packages with conda (replace `rapidsai` with `rapidsai-nightly` to install more up-to-date but less stable nightly packages). `mamba` is preferred over the `conda` command.
```bash
@@ -255,8 +251,6 @@ You can also install the conda packages individually using the `mamba` command a
mamba install -c rapidsai -c conda-forge -c nvidia libraft libraft-headers cuda-version=12.5
```

If installing the C++ APIs please see [using libraft](https://docs.rapids.ai/api/raft/nightly/using_libraft/) for more information on using the pre-compiled shared library. You can also refer to the [example C++ template project](https://github.com/rapidsai/raft/tree/branch-25.02/cpp/template) for a ready-to-go CMake configuration that you can drop into your project and build against installed RAFT development artifacts above.

### Installing Python through Pip

`pylibraft` and `raft-dask` both have experimental packages that can be [installed through pip](https://rapids.ai/pip.html#install):
Expand All @@ -265,12 +259,10 @@ pip install pylibraft-cu11 --extra-index-url=https://pypi.nvidia.com
pip install raft-dask-cu11 --extra-index-url=https://pypi.nvidia.com
```

These packages statically build RAFT's pre-compiled instantiations and so the C++ headers and pre-compiled shared library won't be readily available to use in your code.
These packages statically build RAFT's pre-compiled instantiations and so the C++ headers won't be readily available to use in your code.

The [build instructions](https://docs.rapids.ai/api/raft/nightly/build/) contain more details on building RAFT from source and including it in downstream projects. You can also find a more comprehensive version of the above CPM code snippet in the [Building RAFT C++ and Python from source](https://docs.rapids.ai/api/raft/nightly/build/#building-c-and-python-from-source) section of the build instructions.

You can find an example [RAFT project template](cpp/template/README.md) in the `cpp/template` directory, which demonstrates how to build a new application with RAFT or incorporate RAFT into an existing CMake project.


## Contributing

@@ -284,7 +276,7 @@ When citing RAFT generally, please consider referencing this Github project.
title={Rapidsai/raft: RAFT contains fundamental widely-used algorithms and primitives for data science, Graph and machine learning.},
url={https://github.com/rapidsai/raft},
journal={GitHub},
publisher={Nvidia RAPIDS},
publisher={NVIDIA RAPIDS},
author={Rapidsai},
year={2022}
}
59 changes: 5 additions & 54 deletions build.sh
@@ -18,8 +18,8 @@ ARGS=$*
# scripts, and that this script resides in the repo dir!
REPODIR=$(cd $(dirname $0); pwd)

VALIDARGS="clean libraft pylibraft raft-dask docs tests template bench-prims bench-ann clean --uninstall -v -g -n --compile-lib --compile-static-lib --allgpuarch --no-nvtx --cpu-only --show_depr_warn --incl-cache-stats --time -h"
HELP="$0 [<target> ...] [<flag> ...] [--cmake-args=\"<args>\"] [--cache-tool=<tool>] [--limit-tests=<targets>] [--limit-bench-prims=<targets>] [--limit-bench-ann=<targets>] [--build-metrics=<filename>]
VALIDARGS="clean libraft pylibraft raft-dask docs tests bench-prims clean --uninstall -v -g -n --compile-lib --compile-static-lib --allgpuarch --no-nvtx --show_depr_warn --incl-cache-stats --time -h"
HELP="$0 [<target> ...] [<flag> ...] [--cmake-args=\"<args>\"] [--cache-tool=<tool>] [--limit-tests=<targets>] [--limit-bench-prims=<targets>] [--build-metrics=<filename>]
where <target> is:
clean - remove all existing build artifacts and configuration (start over)
libraft - build the raft C++ code only. Also builds the C-wrapper library
@@ -29,8 +29,6 @@ HELP="$0 [<target> ...] [<flag> ...] [--cmake-args=\"<args>\"] [--cache-tool=<to
docs - build the documentation
tests - build the tests
bench-prims - build micro-benchmarks for primitives
bench-ann - build end-to-end ann benchmarks
template - build the example RAFT application template
and <flag> is:
-v - verbose build mode
@@ -39,10 +37,8 @@ HELP="$0 [<target> ...] [<flag> ...] [--cmake-args=\"<args>\"] [--cache-tool=<to
--uninstall - uninstall files for specified targets which were built and installed prior
--compile-lib - compile shared library for all components
--compile-static-lib - compile static library for all components
--cpu-only - build CPU only components without CUDA. Applies to bench-ann only currently.
--limit-tests - semicolon-separated list of test executables to compile (e.g. NEIGHBORS_TEST;CLUSTER_TEST)
--limit-bench-prims - semicolon-separated list of prims benchmark executables to compute (e.g. NEIGHBORS_PRIMS_BENCH;CLUSTER_PRIMS_BENCH)
--limit-bench-ann - semicolon-separated list of ann benchmark executables to compute (e.g. HNSWLIB_ANN_BENCH;RAFT_IVF_PQ_ANN_BENCH)
--allgpuarch - build for all supported GPU architectures
--no-nvtx - disable nvtx (profiling markers), but allow enabling it in downstream projects
--show_depr_warn - show cmake deprecation warnings
@@ -71,15 +67,13 @@ BUILD_ALL_GPU_ARCH=0
BUILD_TESTS=OFF
BUILD_TYPE=Release
BUILD_PRIMS_BENCH=OFF
BUILD_ANN_BENCH=OFF
BUILD_CPU_ONLY=OFF
COMPILE_LIBRARY=OFF
INSTALL_TARGET=install
BUILD_REPORT_METRICS=""
BUILD_REPORT_INCL_CACHE_STATS=OFF

TEST_TARGETS="CLUSTER_TEST;CORE_TEST;DISTANCE_TEST;LABEL_TEST;LINALG_TEST;MATRIX_TEST;NEIGHBORS_TEST;NEIGHBORS_ANN_BRUTE_FORCE_TEST;NEIGHBORS_ANN_CAGRA_TEST;NEIGHBORS_ANN_NN_DESCENT_TEST;NEIGHBORS_ANN_IVF_TEST;RANDOM_TEST;SOLVERS_TEST;SPARSE_TEST;SPARSE_DIST_TEST;SPARSE_NEIGHBORS_TEST;STATS_TEST;UTILS_TEST"
BENCH_TARGETS="CLUSTER_BENCH;CORE_BENCH;NEIGHBORS_BENCH;DISTANCE_BENCH;LINALG_BENCH;MATRIX_BENCH;SPARSE_BENCH;RANDOM_BENCH"
TEST_TARGETS="CORE_TEST;LABEL_TEST;LINALG_TEST;MATRIX_TEST;RANDOM_TEST;SOLVERS_TEST;SPARSE_TEST;STATS_TEST;UTILS_TEST"
BENCH_TARGETS="CORE_BENCH;LINALG_BENCH;MATRIX_BENCH;SPARSE_BENCH;RANDOM_BENCH"

CACHE_ARGS=""
NVTX=ON
@@ -174,21 +168,6 @@ function limitBench {
fi
}

function limitAnnBench {
# Check for option to limit the set of test binaries to build
if [[ -n $(echo $ARGS | { grep -E "\-\-limit\-bench-ann" || true; } ) ]]; then
# There are possible weird edge cases that may cause this regex filter to output nothing and fail silently
# the true pipe will catch any weird edge cases that may happen and will cause the program to fall back
# on the invalid option error
LIMIT_ANN_BENCH_TARGETS=$(echo $ARGS | sed -e 's/.*--limit-bench-ann=//' -e 's/ .*//')
if [[ -n ${LIMIT_ANN_BENCH_TARGETS} ]]; then
# Remove the full LIMIT_TEST_TARGETS argument from list of args so that it passes validArgs function
ARGS=${ARGS//--limit-bench-ann=$LIMIT_ANN_BENCH_TARGETS/}
ANN_BENCH_TARGETS=${LIMIT_ANN_BENCH_TARGETS}
fi
fi
}

function buildMetrics {
# Check for multiple build-metrics options
if [[ $(echo $ARGS | { grep -Eo "\-\-build\-metrics" || true; } | wc -l ) -gt 1 ]]; then
@@ -219,7 +198,6 @@ if (( ${NUMARGS} != 0 )); then
cacheTool
limitTests
limitBench
limitAnnBench
buildMetrics
for a in ${ARGS}; do
if ! (echo " ${VALIDARGS} " | grep -q " ${a} "); then
@@ -349,18 +327,6 @@ if hasArg bench-prims || (( ${NUMARGS} == 0 )); then
fi
fi

if hasArg bench-ann || (( ${NUMARGS} == 0 )); then
BUILD_ANN_BENCH=ON
CMAKE_TARGET="${CMAKE_TARGET};${ANN_BENCH_TARGETS}"
if hasArg --cpu-only; then
COMPILE_LIBRARY=OFF
BUILD_CPU_ONLY=ON
NVTX=OFF
else
COMPILE_LIBRARY=ON
fi
fi

if hasArg --no-nvtx; then
NVTX=OFF
fi
@@ -405,7 +371,7 @@ fi

################################################################################
# Configure for building all C++ targets
if (( ${NUMARGS} == 0 )) || hasArg libraft || hasArg docs || hasArg tests || hasArg bench-prims || hasArg bench-ann; then
if (( ${NUMARGS} == 0 )) || hasArg libraft || hasArg docs || hasArg tests || hasArg bench-prims; then
if (( ${BUILD_ALL_GPU_ARCH} == 0 )); then
RAFT_CMAKE_CUDA_ARCHITECTURES="NATIVE"
echo "Building for the architecture of the GPU in the system..."
@@ -432,8 +398,6 @@ if (( ${NUMARGS} == 0 )) || hasArg libraft || hasArg docs || hasArg tests || has
-DDISABLE_DEPRECATION_WARNINGS=${DISABLE_DEPRECATION_WARNINGS} \
-DBUILD_TESTS=${BUILD_TESTS} \
-DBUILD_PRIMS_BENCH=${BUILD_PRIMS_BENCH} \
-DBUILD_ANN_BENCH=${BUILD_ANN_BENCH} \
-DBUILD_CPU_ONLY=${BUILD_CPU_ONLY} \
-DCMAKE_MESSAGE_LOG_LEVEL=${CMAKE_LOG_LEVEL} \
${CACHE_ARGS} \
${EXTRA_CMAKE_ARGS}
@@ -505,11 +469,6 @@ if (( ${NUMARGS} == 0 )) || hasArg raft-dask; then
python -m pip install --no-build-isolation --no-deps --config-settings rapidsai.disable-cuda=true ${REPODIR}/python/raft-dask
fi

# Build and (optionally) install the raft-ann-bench Python package
if (( ${NUMARGS} == 0 )) || hasArg bench-ann; then
python -m pip install --no-build-isolation --no-deps --config-settings rapidsai.disable-cuda=true ${REPODIR}/python/raft-ann-bench -vvv
fi

if hasArg docs; then
set -x
export RAPIDS_VERSION="$(sed -E -e 's/^([0-9]{2})\.([0-9]{2})\.([0-9]{2}).*$/\1.\2.\3/' "${REPODIR}/VERSION")"
@@ -520,11 +479,3 @@ if hasArg docs; then
sphinx-build -b html source _html
fi

################################################################################
# Initiate build for example RAFT application template (if needed)

if hasArg template; then
pushd ${REPODIR}/cpp/template
./build.sh
popd
fi
25 changes: 0 additions & 25 deletions ci/build_python.sh
@@ -41,30 +41,5 @@ rapids-conda-retry mambabuild \
conda/recipes/raft-dask

sccache --show-adv-stats
sccache --zero-stats

# Build ann-bench for each cuda and python version
rapids-conda-retry mambabuild \
--no-test \
--channel "${CPP_CHANNEL}" \
--channel "${RAPIDS_CONDA_BLD_OUTPUT_DIR}" \
conda/recipes/raft-ann-bench

sccache --show-adv-stats

# Build ann-bench-cpu only in CUDA 11 jobs since it only depends on python
# version
RAPIDS_CUDA_MAJOR="${RAPIDS_CUDA_VERSION%%.*}"
if [[ ${RAPIDS_CUDA_MAJOR} == "11" ]]; then
sccache --zero-stats

rapids-conda-retry mambabuild \
--no-test \
--channel "${CPP_CHANNEL}" \
--channel "${RAPIDS_CONDA_BLD_OUTPUT_DIR}" \
conda/recipes/raft-ann-bench-cpu

sccache --show-adv-stats
fi

rapids-upload-conda-to-s3 python