diff --git a/README.md b/README.md
index ccd0df4926..773d98e23a 100755
--- a/README.md
+++ b/README.md
@@ -1,7 +1,5 @@
# RAFT: Reusable Accelerated Functions and Tools
-[![Build Status](https://gpuci.gpuopenanalytics.com/job/rapidsai/job/gpuci/job/raft/job/branches/job/raft-branch-pipeline/badge/icon)](https://gpuci.gpuopenanalytics.com/job/rapidsai/job/gpuci/job/raft/job/branches/job/raft-branch-pipeline/)
-
## Resources
- [RAFT Reference Documentation](https://docs.rapids.ai/api/raft/stable/): API Documentation.
@@ -13,9 +11,9 @@
## Overview
-RAFT contains fundamental widely-used algorithms and primitives for data science and machine learning. The algorithms are CUDA-accelerated and form building-blocks for rapidly composing analytics.
+RAFT contains fundamental, widely-used algorithms and primitives for data science and machine learning. The algorithms are CUDA-accelerated and form building blocks for rapidly composing analytics.
-By taking a primitives-based approach to algorithm development, RAFT
+By taking a primitives-based approach to algorithm development, RAFT
- accelerates algorithm construction time
- reduces the maintenance burden by maximizing reuse across projects, and
- centralizes core reusable computations, allowing future optimizations to benefit all algorithms that use them.
@@ -48,7 +46,7 @@ RAFT relies heavily on RMM which eases the burden of configuring different alloc
### Multi-dimensional Arrays
-The APIs in RAFT currently accept raw pointers to device memory and we are in the process of simplifying the APIs with the [mdspan](https://arxiv.org/abs/2010.06474) multi-dimensional array view for representing data in higher dimensions similar to the `ndarray` in the Numpy Python library. RAFT also contains the corresponding owning `mdarray` structure, which simplifies the allocation and management of multi-dimensional data in both host and device (GPU) memory.
+The APIs in RAFT currently accept raw pointers to device memory. We are in the process of simplifying the APIs with the [mdspan](https://arxiv.org/abs/2010.06474) multi-dimensional array view, which represents data in higher dimensions similarly to the `ndarray` in the NumPy Python library. RAFT also contains the corresponding owning `mdarray` structure, which simplifies the allocation and management of multi-dimensional data in both host and device (GPU) memory.
The `mdarray` forms a convenience layer over RMM and can be constructed in RAFT using a number of different helper functions:
@@ -188,7 +186,7 @@ pairwise_distance(in1, in2, out=output, metric="euclidean")
## Installing
-RAFT itself can be installed through conda, [Cmake Package Manager (CPM)](https://github.com/cpm-cmake/CPM.cmake), pip, or by building the repository from source. Please refer to the [build instructions](docs/source/build.md) for more a comprehensive guide on installing and building RAFT and using it in downstream projects.
+RAFT itself can be installed through conda, [CMake Package Manager (CPM)](https://github.com/cpm-cmake/CPM.cmake), pip, or by building the repository from source. Please refer to the [build instructions](docs/source/build.md) for a more comprehensive guide on installing and building RAFT and using it in downstream projects.
### Conda
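The conda commands themselves sit outside this hunk; as a hedged illustration only (the package names come from the recipes in this repository, while the channel list and CUDA pin are assumptions), an install would look roughly like:

```bash
# Illustrative sketch, not the documented command: installs the RAFT Python
# packages from conda. Channel order and the cudatoolkit pin are assumptions.
conda install -c rapidsai -c conda-forge -c nvidia \
    pylibraft raft-dask cudatoolkit=11.8
```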
@@ -216,9 +214,9 @@ pip install pylibraft-cu11 --extra-index-url=https://pypi.ngc.nvidia.com
pip install raft-dask-cu11 --extra-index-url=https://pypi.ngc.nvidia.com
```
-### Cmake & CPM
+### CMake & CPM
-RAFT uses the [RAPIDS-CMake](https://github.com/rapidsai/rapids-cmake) library, which makes it simple to include in downstream cmake projects. RAPIDS CMake provides a convenience layer around CPM.
+RAFT uses the [RAPIDS-CMake](https://github.com/rapidsai/rapids-cmake) library, which makes it simple to include in downstream CMake projects. RAPIDS CMake provides a convenience layer around CPM.
After [installing](https://github.com/rapidsai/rapids-cmake#installation) rapids-cmake in your project, you can begin using RAFT by placing the code snippet below in a file named `get_raft.cmake` and including it in your CMake build with `include(get_raft.cmake)`. This makes several targets available for configuring the link libraries of your artifacts.
@@ -292,14 +290,14 @@ The folder structure mirrors other RAPIDS repos, with the following folders:
- `ci`: Scripts for running CI in PRs
- `conda`: Conda recipes and development conda environments
-- `cpp`: Source code for C++ libraries.
+- `cpp`: Source code for C++ libraries.
- `bench`: Benchmarks source code
- - `cmake`: Cmake modules and templates
+ - `cmake`: CMake modules and templates
- `doxygen`: Doxygen configuration
- `include`: The C++ API headers are fully-contained here (deprecated directories are excluded from the listing below)
- `cluster`: Basic clustering primitives and algorithms.
- `comms`: A multi-node multi-GPU communications abstraction layer for NCCL+UCX and MPI+NCCL, which can be deployed in Dask clusters using the `raft-dask` Python package.
- - `core`: Core API headers which require minimal dependencies aside from RMM and Cudatoolkit. These are safe to expose on public APIs and do not require `nvcc` to build. This is the same for any headers in RAFT which have the suffix `*_types.hpp`.
+ - `core`: Core API headers which require minimal dependencies aside from RMM and the CUDA Toolkit. These are safe to expose on public APIs and do not require `nvcc` to build. The same is true for any headers in RAFT which have the suffix `*_types.hpp`.
- `distance`: Distance primitives
- `linalg`: Dense linear algebra
- `matrix`: Dense matrix operations
@@ -327,17 +325,17 @@ The folder structure mirrors other RAPIDS repos, with the following folders:
## Contributing
-If you are interested in contributing to the RAFT project, please read our [Contributing guidelines](docs/source/contributing.md). Refer to the [Developer Guide](docs/source/developer_guide.md) for details on the developer guidelines, workflows, and principals.
+If you are interested in contributing to the RAFT project, please read our [Contributing guidelines](docs/source/contributing.md). Refer to the [Developer Guide](docs/source/developer_guide.md) for details on the developer guidelines, workflows, and principles.
## References
When citing RAFT generally, please consider referencing this GitHub project.
```bibtex
-@misc{rapidsai,
+@misc{rapidsai,
title={Rapidsai/raft: RAFT contains fundamental widely-used algorithms and primitives for data science, Graph and machine learning.},
- url={https://github.com/rapidsai/raft},
- journal={GitHub},
- publisher={Nvidia RAPIDS},
+ url={https://github.com/rapidsai/raft},
+ journal={GitHub},
+ publisher={Nvidia RAPIDS},
author={Rapidsai},
year={2022}
}
diff --git a/ci/checks/style.sh b/ci/checks/style.sh
deleted file mode 100644
index f8fcbe19f8..0000000000
--- a/ci/checks/style.sh
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2020-2022, NVIDIA CORPORATION.
-#####################
-# RAFT Style Tester #
-#####################
-
-# Ignore errors and set path
-set +e
-PATH=/opt/conda/bin:$PATH
-
-# Activate common conda env
-. /opt/conda/etc/profile.d/conda.sh
-conda activate rapids
-
-FORMAT_FILE_URL=https://raw.githubusercontent.com/rapidsai/rapids-cmake/branch-22.12/cmake-format-rapids-cmake.json
-export RAPIDS_CMAKE_FORMAT_FILE=/tmp/rapids_cmake_ci/cmake-formats-rapids-cmake.json
-mkdir -p $(dirname ${RAPIDS_CMAKE_FORMAT_FILE})
-wget -O ${RAPIDS_CMAKE_FORMAT_FILE} ${FORMAT_FILE_URL}
-
-# Run pre-commit checks
-pre-commit run --hook-stage manual --all-files
-
-exit $RETVAL
diff --git a/ci/cpu/build.sh b/ci/cpu/build.sh
deleted file mode 100755
index 5bb09520a8..0000000000
--- a/ci/cpu/build.sh
+++ /dev/null
@@ -1,135 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2022-2023, NVIDIA CORPORATION.
-#########################################
-# RAFT CPU conda build script for CI #
-#########################################
-set -e
-
-# Set path and build parallel level
-# openmpi dir is required on CentOS for finding MPI libs from cmake
-if [[ -e /etc/os-release ]] && (grep -qi centos /etc/os-release); then
- export PATH=/opt/conda/bin:/usr/local/cuda/bin:/usr/lib64/openmpi/bin:$PATH
-else
- export PATH=/opt/conda/bin:/usr/local/cuda/bin:$PATH
-fi
-export PARALLEL_LEVEL=${PARALLEL_LEVEL:-8}
-
-# Set home to the job's workspace
-export HOME=$WORKSPACE
-
-# Switch to project root; also root of repo checkout
-cd $WORKSPACE
-
-# If nightly build, append current YYMMDD to version
-if [[ "$BUILD_MODE" = "branch" && "$SOURCE_BRANCH" = branch-* ]] ; then
- export VERSION_SUFFIX=$(date +%y%m%d)
-else
- export VERSION_SUFFIX=""
-fi
-
-# Setup 'gpuci_conda_retry' for build retries (results in 2 total attempts)
-export GPUCI_CONDA_RETRY_MAX=1
-export GPUCI_CONDA_RETRY_SLEEP=30
-
-# Workaround to keep Jenkins builds working
-# until we migrate fully to GitHub Actions
-export RAPIDS_CUDA_VERSION="${CUDA}"
-export SCCACHE_BUCKET=rapids-sccache
-export SCCACHE_REGION=us-west-2
-export SCCACHE_IDLE_TIMEOUT=32768
-
-# Use Ninja to build
-export CMAKE_GENERATOR="Ninja"
-export CONDA_BLD_DIR="${WORKSPACE}/.conda-bld"
-
-# ucx-py version
-export UCX_PY_VERSION='0.31.*'
-
-################################################################################
-# SETUP - Check environment
-################################################################################
-
-gpuci_logger "Check environment variables"
-env
-
-gpuci_logger "Activate conda env"
-. /opt/conda/etc/profile.d/conda.sh
-conda activate rapids
-
-# Remove rapidsai-nightly channel if we are building main branch
-if [ "$SOURCE_BRANCH" = "main" ]; then
- conda config --system --remove channels rapidsai-nightly
-fi
-
-gpuci_logger "Check versions"
-python --version
-$CC --version
-$CXX --version
-
-gpuci_logger "Check conda environment"
-conda info
-conda config --show-sources
-conda list --show-channel-urls
-
-# FIX Added to deal with Anancoda SSL verification issues during conda builds
-conda config --set ssl_verify False
-
-if [ "$BUILD_LIBRAFT" == "1" ]; then
- # If we are doing CUDA builds, libraft package is located at ${CONDA_BLD_DIR}
- CONDA_LOCAL_CHANNEL="${CONDA_BLD_DIR}"
-else
- # If we are doing Python builds only, libraft package is placed here by Project Flash
- CONDA_LOCAL_CHANNEL="ci/artifacts/raft/cpu/.conda-bld/"
-fi
-
-gpuci_mamba_retry install -c conda-forge boa
-
-###############################################################################
-# BUILD - Conda package builds
-###############################################################################
-
-if [ "$BUILD_LIBRAFT" == "1" ]; then
- gpuci_logger "Building conda packages for libraft-nn, libraft-distance, libraft-headers and libraft-tests"
- if [[ -z "$PROJECT_FLASH" || "$PROJECT_FLASH" == "0" ]]; then
- gpuci_conda_retry mambabuild --no-build-id --croot ${CONDA_BLD_DIR} conda/recipes/libraft
- else
- gpuci_conda_retry mambabuild --no-build-id --croot ${CONDA_BLD_DIR} --dirty --no-remove-work-dir conda/recipes/libraft
- gpuci_logger "`ls ${CONDA_BLD_DIR}/work`"
- mkdir -p ${CONDA_BLD_DIR}/libraft/work
- mv ${CONDA_BLD_DIR}/work ${CONDA_BLD_DIR}/libraft/work
- fi
- sccache --show-stats
-else
- gpuci_logger "SKIPPING build of conda packages for libraft-nn, libraft-distance, libraft-headers and libraft-tests"
-
- # Install pre-built conda packages from previous CI step
- gpuci_logger "Install libraft conda packages from CPU job"
- CONDA_ARTIFACT_PATH=${WORKSPACE}/ci/artifacts/raft/cpu/.conda-bld/ # notice there is no `linux-64` here
- gpuci_mamba_retry install -y -c ${CONDA_ARTIFACT_PATH} libraft-headers libraft-distance libraft-nn libraft-tests
-fi
-
-if [ "$BUILD_RAFT" == '1' ]; then
- gpuci_logger "Building Python conda packages for raft"
- if [[ -z "$PROJECT_FLASH" || "$PROJECT_FLASH" == "0" ]]; then
- gpuci_conda_retry mambabuild --no-build-id --croot ${CONDA_BLD_DIR} conda/recipes/pylibraft --python=$PYTHON
- gpuci_conda_retry mambabuild --no-build-id --croot ${CONDA_BLD_DIR} conda/recipes/raft-dask --python=$PYTHON
- else
- gpuci_conda_retry mambabuild --no-build-id --croot ${CONDA_BLD_DIR} conda/recipes/pylibraft -c ${CONDA_LOCAL_CHANNEL} --dirty --no-remove-work-dir --python=$PYTHON
- mkdir -p ${CONDA_BLD_DIR}/pylibraft/work
- mv ${CONDA_BLD_DIR}/work ${CONDA_BLD_DIR}/pylibraft/work
-
- gpuci_conda_retry mambabuild --no-build-id --croot ${CONDA_BLD_DIR} conda/recipes/raft-dask -c ${CONDA_LOCAL_CHANNEL} --dirty --no-remove-work-dir --python=$PYTHON
- mkdir -p ${CONDA_BLD_DIR}/raft-dask/work
- mv ${CONDA_BLD_DIR}/work ${CONDA_BLD_DIR}/raft-dask/work
- fi
-else
- gpuci_logger "SKIPPING build of Python conda packages for raft"
-fi
-
-################################################################################
-# UPLOAD - Conda packages
-################################################################################
-
-# Uploads disabled due to new GH Actions implementation
-# gpuci_logger "Upload conda packages"
-# source ci/cpu/upload.sh
diff --git a/ci/cpu/prebuild.sh b/ci/cpu/prebuild.sh
deleted file mode 100755
index ea12bf8b35..0000000000
--- a/ci/cpu/prebuild.sh
+++ /dev/null
@@ -1,22 +0,0 @@
-#!/usr/bin/env bash
-# Copyright (c) 2022, NVIDIA CORPORATION.
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-export UPLOAD_RAFT=1
-export UPLOAD_LIBRAFT=1
-
-if [[ -z "$PROJECT_FLASH" || "$PROJECT_FLASH" == "0" ]]; then
- #If project flash is not activate, always build both
- export BUILD_RAFT=1
- export BUILD_LIBRAFT=1
-fi
diff --git a/ci/cpu/upload.sh b/ci/cpu/upload.sh
deleted file mode 100755
index cce7f4edef..0000000000
--- a/ci/cpu/upload.sh
+++ /dev/null
@@ -1,52 +0,0 @@
-#!/bin/bash
-# Copyright (c) 2022, NVIDIA CORPORATION.
-#
-# Adopted from https://github.com/tmcdonell/travis-scripts/blob/dfaac280ac2082cd6bcaba3217428347899f2975/update-accelerate-buildbot.sh
-
-set -e
-
-# Setup 'gpuci_retry' for upload retries (results in 4 total attempts)
-export GPUCI_RETRY_MAX=3
-export GPUCI_RETRY_SLEEP=30
-
-# Set label option.
-#LABEL_OPTION="--label testing"
-LABEL_OPTION="--label main"
-
-# Skip uploads unless BUILD_MODE == "branch"
-if [ ${BUILD_MODE} != "branch" ]; then
- echo "Skipping upload"
- return 0
-fi
-
-# Skip uploads if there is no upload key
-if [ -z "$MY_UPLOAD_KEY" ]; then
- echo "No upload key"
- return 0
-fi
-
-################################################################################
-# UPLOAD - Conda packages
-################################################################################
-
-gpuci_logger "Starting conda uploads"
-
-if [[ "$BUILD_LIBRAFT" == "1" && "$UPLOAD_LIBRAFT" == "1" ]]; then
- LIBRAFT_FILES=$(conda build --no-build-id --croot ${CONDA_BLD_DIR} -c ${CONDA_LOCAL_CHANNEL} conda/recipes/libraft --output)
- echo "Upload libraft-headers, libraft-nn, libraft-distance and libraft-tests"
- gpuci_retry anaconda -t ${MY_UPLOAD_KEY} upload -u ${CONDA_USERNAME:-rapidsai} ${LABEL_OPTION} --skip-existing --no-progress ${LIBRAFT_FILES}
-fi
-
-if [[ "$BUILD_RAFT" == "1" && "$UPLOAD_RAFT" == "1" ]]; then
- RAFT_DASK_FILE=$(conda build --no-build-id --croot ${CONDA_BLD_DIR} -c ${CONDA_LOCAL_CHANNEL} conda/recipes/raft-dask --python=$PYTHON --output)
- PYLIBRAFT_FILE=$(conda build --no-build-id --croot ${CONDA_BLD_DIR} -c ${CONDA_LOCAL_CHANNEL} conda/recipes/pylibraft --python=$PYTHON --output)
- test -e ${RAFT_DASK_FILE}
- echo "Upload raft-dask"
- echo ${RAFT_DASK_FILE}
- gpuci_retry anaconda -t ${MY_UPLOAD_KEY} upload -u ${CONDA_USERNAME:-rapidsai} ${LABEL_OPTION} --skip-existing ${RAFT_DASK_FILE} --no-progress
-
- test -e ${PYLIBRAFT_FILE}
- echo "Upload pylibraft"
- echo ${PYLIBRAFT_FILE}
- gpuci_retry anaconda -t ${MY_UPLOAD_KEY} upload -u ${CONDA_USERNAME:-rapidsai} ${LABEL_OPTION} --skip-existing ${PYLIBRAFT_FILE} --no-progress
-fi
diff --git a/ci/gpu/build.sh b/ci/gpu/build.sh
deleted file mode 100644
index 78b860a0f1..0000000000
--- a/ci/gpu/build.sh
+++ /dev/null
@@ -1,134 +0,0 @@
-#!/bin/bash
-
-
-# Copyright (c) 2020-2023, NVIDIA CORPORATION.
-#########################################
-# RAFT GPU build and test script for CI #
-#########################################
-
-set -e
-NUMARGS=$#
-ARGS=$*
-
-# Arg parsing function
-function hasArg {
- (( ${NUMARGS} != 0 )) && (echo " ${ARGS} " | grep -q " $1 ")
-}
-
-# Set path and build parallel level
-export PATH=/opt/conda/bin:/usr/local/cuda/bin:$PATH
-export PARALLEL_LEVEL=${PARALLEL_LEVEL:-8}
-export CUDA_REL=${CUDA_VERSION%.*}
-CONDA_ARTIFACT_PATH=${WORKSPACE}/ci/artifacts/raft/cpu/.conda-bld/ # notice there is no `linux-64` here
-
-# Workaround to keep Jenkins builds working
-# until we migrate fully to GitHub Actions
-export RAPIDS_CUDA_VERSION="${CUDA}"
-export SCCACHE_BUCKET=rapids-sccache
-export SCCACHE_REGION=us-west-2
-export SCCACHE_IDLE_TIMEOUT=32768
-
-# Set home to the job's workspace
-export HOME=$WORKSPACE
-
-# Parse git describe
-cd $WORKSPACE
-export GIT_DESCRIBE_TAG=`git describe --tags`
-export MINOR_VERSION=`echo $GIT_DESCRIBE_TAG | grep -o -E '([0-9]+\.[0-9]+)'`
-unset GIT_DESCRIBE_TAG
-
-# ucx-py version
-export UCX_PY_VERSION='0.31.*'
-
-# Whether to install dask nightly or stable packages.
-export INSTALL_DASK_MAIN=0
-
-# Dask version to install when `INSTALL_DASK_MAIN=0`
-export DASK_STABLE_VERSION="2023.1.1"
-
-################################################################################
-# SETUP - Check environment
-################################################################################
-
-gpuci_logger "Check environment"
-env
-
-gpuci_logger "Check GPU usage"
-nvidia-smi
-
-gpuci_logger "Activate conda env"
-. /opt/conda/etc/profile.d/conda.sh
-conda activate rapids
-
-# Install pre-built conda packages from previous CI step
-gpuci_logger "Install libraft conda packages from CPU job"
-gpuci_mamba_retry install -y -c "${CONDA_ARTIFACT_PATH}" libraft-headers libraft-distance libraft-nn libraft-tests
-
-gpuci_logger "Check conda environment"
-conda info
-conda config --show-sources
-conda list --show-channel-urls
-
-################################################################################
-# BUILD - Build RAFT tests
-################################################################################
-
-gpuci_logger "Build and install Python targets"
-CONDA_BLD_DIR="$WORKSPACE/.conda-bld"
-gpuci_mamba_retry install boa
-
-# Install pylibraft first since it's a dependency of raft-dask
-gpuci_conda_retry mambabuild --no-build-id --croot "${CONDA_BLD_DIR}" conda/recipes/pylibraft -c "${CONDA_ARTIFACT_PATH}" --python="${PYTHON}"
-gpuci_mamba_retry install -y -c "${CONDA_BLD_DIR}" -c "${CONDA_ARTIFACT_PATH}" pylibraft
-
-gpuci_conda_retry mambabuild --no-build-id --croot "${CONDA_BLD_DIR}" conda/recipes/raft-dask -c "${CONDA_ARTIFACT_PATH}" --python="${PYTHON}"
-gpuci_mamba_retry install -y -c "${CONDA_BLD_DIR}" -c "${CONDA_ARTIFACT_PATH}" raft-dask
-
-################################################################################
-# TEST - Run GoogleTest and py.tests for RAFT
-################################################################################
-
-if hasArg --skip-tests; then
- gpuci_logger "Skipping Tests"
- exit 0
-fi
-
-set -x
-# Install latest nightly version for dask and distributed depending on `INSTALL_DASK_MAIN`
-if [[ "${INSTALL_DASK_MAIN}" == 1 ]]; then
- gpuci_logger "Installing dask and distributed from dask nightly channel"
- gpuci_mamba_retry install -c dask/label/dev \
- "dask/label/dev::dask" \
- "dask/label/dev::distributed"
-else
- gpuci_logger "gpuci_mamba_retry install conda-forge::dask==${DASK_STABLE_VERSION} conda-forge::distributed==${DASK_STABLE_VERSION} conda-forge::dask-core==${DASK_STABLE_VERSION} --force-reinstall"
- gpuci_mamba_retry install conda-forge::dask==${DASK_STABLE_VERSION} conda-forge::distributed==${DASK_STABLE_VERSION} conda-forge::dask-core==${DASK_STABLE_VERSION} --force-reinstall
-fi
-set +x
-
-gpuci_logger "Check GPU usage"
-nvidia-smi
-
-gpuci_logger "GoogleTest for libraft"
-GTEST_ARGS="xml:${WORKSPACE}/test-results/libraft/"
-for gt in "$CONDA_PREFIX/bin/gtests/libraft/"*; do
- test_name=$(basename $gt)
- echo "Running gtest $test_name"
- ${gt} ${GTEST_ARGS}
- echo "Ran gtest $test_name : return code was: $?, test script exit code is now: $EXITCODE"
-done
-
-
-gpuci_logger "Python pytest for pylibraft"
-cd "$WORKSPACE/python/pylibraft/pylibraft/test"
-pytest --cache-clear --junitxml="$WORKSPACE/junit-pylibraft.xml" -v -s
-
-gpuci_logger "Python pytest for raft-dask"
-cd "$WORKSPACE/python/raft-dask/raft_dask/test"
-pytest --cache-clear --junitxml="$WORKSPACE/junit-raft-dask.xml" -v -s
-
-if [ "$(arch)" = "x86_64" ]; then
- gpuci_logger "Building docs"
- gpuci_mamba_retry install "rapids-doc-env=${MINOR_VERSION}.*"
- "$WORKSPACE/build.sh" docs -v -n
-fi
diff --git a/ci/local/README.md b/ci/local/README.md
deleted file mode 100644
index bae3b278f0..0000000000
--- a/ci/local/README.md
+++ /dev/null
@@ -1,58 +0,0 @@
-## Purpose
-
-This script is designed for developer and contributor use. This tool mimics the actions of gpuCI on your local machine. This allows you to test and even debug your code inside a gpuCI base container before pushing your code as a GitHub commit.
-The script can be helpful in locally triaging and debugging RAPIDS continuous integration failures.
-
-## Requirements
-
-```
-nvidia-docker
-```
-
-## Usage
-
-```
-bash build.sh [-h] [-H] [-s] [-r ] [-i ]
-Build and test your local repository using a base gpuCI Docker image
-
-where:
- -H Show this help text
- -r Path to repository (defaults to working directory)
- -i Use Docker image (default is gpuci/rapidsai-base:cuda10.0-ubuntu16.04-gcc5-py3.6)
- -s Skip building and testing and start an interactive shell in a container of the Docker image
-```
-
-Example Usage:
-`bash build.sh -r ~/rapids/raft -i gpuci/rapidsai-base:cuda11.5-ubuntu20.04-py3.8`
-
-For a full list of available gpuCI docker images, visit our [DockerHub](https://hub.docker.com/r/gpuci/rapidsai-base/tags) page.
-
-Style Check:
-```bash
-$ bash ci/local/build.sh -r ~/rapids/raft -s
-$ . /opt/conda/etc/profile.d/conda.sh
-$ conda activate rapids #Activate gpuCI conda environment
-$ cd rapids
-$ flake8 python
-```
-
-## Information
-
-There are some caveats to be aware of when using this script, especially if you plan on developing from within the container itself.
-
-
-### Docker Image Build Repository
-
-The docker image will generate build artifacts in a folder on your machine located in the `root` directory of the repository you passed to the script. For the above example, the directory is named `~/rapids/raft/build_rapidsai-base_cuda9.2-ubuntu16.04-gcc5-py3.6/`. Feel free to remove this directory after the script is finished.
-
-*Note*: The script *will not* override your local build repository. Your local environment stays in tact.
-
-
-### Where The User is Dumped
-
-The script will build your repository and run all tests. If any tests fail, it dumps the user into the docker container itself to allow you to debug from within the container. If all the tests pass as expected the container exits and is automatically removed. Remember to exit the container if tests fail and you do not wish to debug within the container itself.
-
-
-### Container File Structure
-
-Your repository will be located in the `/rapids/` folder of the container. This folder is volume mounted from the local machine. Any changes to the code in this repository are replicated onto the local machine. The `cpp/build` and `python/build` directories within your repository is on a separate mount to avoid conflicting with your local build artifacts.
diff --git a/ci/local/build.sh b/ci/local/build.sh
deleted file mode 100644
index cdafd967c7..0000000000
--- a/ci/local/build.sh
+++ /dev/null
@@ -1,131 +0,0 @@
-#!/bin/bash
-
-GIT_DESCRIBE_TAG=`git describe --tags`
-MINOR_VERSION=`echo $GIT_DESCRIBE_TAG | grep -o -E '([0-9]+\.[0-9]+)'`
-
-DOCKER_IMAGE="gpuci/rapidsai:${MINOR_VERSION}-cuda10.1-devel-ubuntu16.04-py3.7"
-REPO_PATH=${PWD}
-RAPIDS_DIR_IN_CONTAINER="/rapids"
-CPP_BUILD_DIR="raft/build"
-PYTHON_BUILD_DIR="python/build"
-CONTAINER_SHELL_ONLY=0
-
-SHORTHELP="$(basename "$0") [-h] [-H] [-s] [-r ] [-i ]"
-LONGHELP="${SHORTHELP}
-Build and test your local repository using a base gpuCI Docker image
-
-where:
- -H Show this help text
- -r Path to repository (defaults to working directory)
- -i Use Docker image (default is ${DOCKER_IMAGE})
- -s Skip building and testing and start an interactive shell in a container of the Docker image
-"
-
-# Limit GPUs available to container based on CUDA_VISIBLE_DEVICES
-if [[ -z "${CUDA_VISIBLE_DEVICES}" ]]; then
- NVIDIA_VISIBLE_DEVICES="all"
-else
- NVIDIA_VISIBLE_DEVICES=${CUDA_VISIBLE_DEVICES}
-fi
-
-while getopts ":hHr:i:s" option; do
- case ${option} in
- r)
- REPO_PATH=${OPTARG}
- ;;
- i)
- DOCKER_IMAGE=${OPTARG}
- ;;
- s)
- CONTAINER_SHELL_ONLY=1
- ;;
- h)
- echo "${SHORTHELP}"
- exit 0
- ;;
- H)
- echo "${LONGHELP}"
- exit 0
- ;;
- *)
- echo "ERROR: Invalid flag"
- echo "${SHORTHELP}"
- exit 1
- ;;
- esac
-done
-
-REPO_PATH_IN_CONTAINER="${RAPIDS_DIR_IN_CONTAINER}/$(basename "${REPO_PATH}")"
-CPP_BUILD_DIR_IN_CONTAINER="${RAPIDS_DIR_IN_CONTAINER}/$(basename "${REPO_PATH}")/${CPP_BUILD_DIR}"
-PYTHON_BUILD_DIR_IN_CONTAINER="${RAPIDS_DIR_IN_CONTAINER}/$(basename "${REPO_PATH}")/${PYTHON_BUILD_DIR}"
-
-
-# BASE_CONTAINER_BUILD_DIR is named after the image name, allowing for
-# multiple image builds to coexist on the local filesystem. This will
-# be mapped to the typical BUILD_DIR inside of the container. Builds
-# running in the container generate build artifacts just as they would
-# in a bare-metal environment, and the host filesystem is able to
-# maintain the host build in BUILD_DIR as well.
-# shellcheck disable=SC2001,SC2005,SC2046
-BASE_CONTAINER_BUILD_DIR=${REPO_PATH}/build_$(echo $(basename "${DOCKER_IMAGE}")|sed -e 's/:/_/g')
-CPP_CONTAINER_BUILD_DIR=${BASE_CONTAINER_BUILD_DIR}/cpp
-PYTHON_CONTAINER_BUILD_DIR=${BASE_CONTAINER_BUILD_DIR}/python
-
-
-BUILD_SCRIPT="#!/bin/bash
-set -e
-WORKSPACE=${REPO_PATH_IN_CONTAINER}
-PREBUILD_SCRIPT=${REPO_PATH_IN_CONTAINER}/ci/gpu/prebuild.sh
-BUILD_SCRIPT=${REPO_PATH_IN_CONTAINER}/ci/gpu/build.sh
-cd "\$WORKSPACE"
-if [ -f \${PREBUILD_SCRIPT} ]; then
- source \${PREBUILD_SCRIPT}
-fi
-yes | source \${BUILD_SCRIPT}
-"
-
-if (( CONTAINER_SHELL_ONLY == 0 )); then
- COMMAND="${CPP_BUILD_DIR_IN_CONTAINER}/build.sh || bash"
-else
- COMMAND="bash"
-fi
-
-# Create the build dir for the container to mount, generate the build script inside of it
-mkdir -p "${BASE_CONTAINER_BUILD_DIR}"
-mkdir -p "${CPP_CONTAINER_BUILD_DIR}"
-mkdir -p "${PYTHON_CONTAINER_BUILD_DIR}"
-# Create build directories. This is to ensure correct owner for directories. If
-# directories don't exist there is side effect from docker volume mounting creating build
-# directories owned by root(volume mount point(s))
-mkdir -p "${REPO_PATH}/${CPP_BUILD_DIR}"
-mkdir -p "${REPO_PATH}/${PYTHON_BUILD_DIR}"
-
-echo "${BUILD_SCRIPT}" > "${CPP_CONTAINER_BUILD_DIR}/build.sh"
-chmod ugo+x "${CPP_CONTAINER_BUILD_DIR}/build.sh"
-PASSWD_FILE="/etc/passwd"
-GROUP_FILE="/etc/group"
-
-USER_FOUND=$(grep -wc "$(whoami)" < "$PASSWD_FILE")
-if [ "$USER_FOUND" == 0 ]; then
- echo "Local User not found, LDAP WAR for docker mounts activated. Creating dummy passwd and group"
- echo "files to allow docker resolve username and group"
- cp "$PASSWD_FILE" /tmp/passwd
- PASSWD_FILE="/tmp/passwd"
- cp "$GROUP_FILE" /tmp/group
- GROUP_FILE="/tmp/group"
- echo "$(whoami):x:$(id -u):$(id -g):$(whoami),,,:$HOME:$SHELL" >> "$PASSWD_FILE"
- echo "$(whoami):x:$(id -g):" >> "$GROUP_FILE"
-fi
-
-# Run the generated build script in a container
-docker pull "${DOCKER_IMAGE}"
-docker run --runtime=nvidia --rm -it -e NVIDIA_VISIBLE_DEVICES="${NVIDIA_VISIBLE_DEVICES}" \
- --user "$(id -u)":"$(id -g)" \
- -v "${REPO_PATH}:${REPO_PATH_IN_CONTAINER}" \
- -v "${CPP_CONTAINER_BUILD_DIR}:${CPP_BUILD_DIR_IN_CONTAINER}" \
- -v "${PYTHON_CONTAINER_BUILD_DIR}:${PYTHON_BUILD_DIR_IN_CONTAINER}" \
- -v "$PASSWD_FILE":/etc/passwd:ro \
- -v "$GROUP_FILE":/etc/group:ro \
- --cap-add=SYS_PTRACE \
- "${DOCKER_IMAGE}" bash -c "${COMMAND}"
-
diff --git a/ci/release/update-version.sh b/ci/release/update-version.sh
index 4bc96eebb2..be27e68218 100755
--- a/ci/release/update-version.sh
+++ b/ci/release/update-version.sh
@@ -17,7 +17,6 @@ CURRENT_MAJOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[1]}')
CURRENT_MINOR=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[2]}')
CURRENT_PATCH=$(echo $CURRENT_TAG | awk '{split($0, a, "."); print a[3]}')
CURRENT_SHORT_TAG=${CURRENT_MAJOR}.${CURRENT_MINOR}
-CURRENT_UCX_PY_VERSION="$(curl -sL https://version.gpuci.io/rapids/${CURRENT_SHORT_TAG}).*"
#Get <major>.<minor> for next version
NEXT_MAJOR=$(echo $NEXT_FULL_TAG | awk '{split($0, a, "."); print a[1]}')
@@ -52,8 +51,6 @@ for FILE in conda/environments/*.yaml dependencies.yaml; do
sed_runner "s/ucx-py=.*/ucx-py=${NEXT_UCX_PY_VERSION}/g" ${FILE};
done
-sed_runner "s/export UCX_PY_VERSION=.*/export UCX_PY_VERSION='${NEXT_UCX_PY_VERSION}'/g" ci/gpu/build.sh
-sed_runner "s/export UCX_PY_VERSION=.*/export UCX_PY_VERSION='${NEXT_UCX_PY_VERSION}'/g" ci/cpu/build.sh
sed_runner "/^ucx_py_version:$/ {n;s/.*/ - \"${NEXT_UCX_PY_VERSION}\"/}" conda/recipes/raft-dask/conda_build_config.yaml
# Wheel builds install dask-cuda from source, update its branch
diff --git a/ci/test_cpp.sh b/ci/test_cpp.sh
index d8538bdf47..44e446d8f6 100755
--- a/ci/test_cpp.sh
+++ b/ci/test_cpp.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright (c) 2022, NVIDIA CORPORATION.
+# Copyright (c) 2022-2023, NVIDIA CORPORATION.
set -euo pipefail
@@ -21,7 +21,6 @@ set -u
CPP_CHANNEL=$(rapids-download-conda-from-s3 cpp)
RAPIDS_TESTS_DIR=${RAPIDS_TESTS_DIR:-"${PWD}/test-results"}/
mkdir -p "${RAPIDS_TESTS_DIR}"
-SUITEERROR=0
rapids-print-env
@@ -32,23 +31,17 @@ rapids-mamba-retry install \
rapids-logger "Check GPU usage"
nvidia-smi
+EXITCODE=0
+trap "EXITCODE=1" ERR
set +e
# Run libraft gtests from libraft-tests package
rapids-logger "Run gtests"
-
-# TODO: exit code handling is too verbose. Find a cleaner solution.
-
for gt in "$CONDA_PREFIX"/bin/gtests/libraft/* ; do
test_name=$(basename ${gt})
echo "Running gtest $test_name"
${gt} --gtest_output=xml:${RAPIDS_TESTS_DIR}
-
- exitcode=$?
- if (( ${exitcode} != 0 )); then
- SUITEERROR=${exitcode}
- echo "FAILED: GTest ${gt}"
- fi
done
-exit ${SUITEERROR}
+rapids-logger "Test script exiting with value: $EXITCODE"
+exit ${EXITCODE}
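The updated test scripts above replace per-suite exit-code bookkeeping with a small shell idiom: with `set +e` in effect, an `ERR` trap records that some command failed without stopping the run, and the script reports the aggregate result on exit. A minimal standalone sketch of that pattern, using `true`/`false` as stand-ins for test binaries:

```bash
#!/bin/bash
# Minimal sketch of the EXITCODE/ERR-trap pattern used by the updated CI test scripts.

EXITCODE=0
trap "EXITCODE=1" ERR   # any failing command flips EXITCODE to 1
set +e                  # keep running instead of exiting on the first failure

for cmd in true false true; do
    echo "Running: ${cmd}"
    ${cmd}              # a non-zero exit here fires the ERR trap, then the loop continues
done

echo "Test script exiting with value: ${EXITCODE}"
exit ${EXITCODE}
```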
diff --git a/ci/test_python.sh b/ci/test_python.sh
index eb458d2a5a..934c9c6951 100755
--- a/ci/test_python.sh
+++ b/ci/test_python.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-# Copyright (c) 2022, NVIDIA CORPORATION.
+# Copyright (c) 2022-2023, NVIDIA CORPORATION.
set -euo pipefail
@@ -25,7 +25,6 @@ PYTHON_CHANNEL=$(rapids-download-conda-from-s3 python)
RAPIDS_TESTS_DIR=${RAPIDS_TESTS_DIR:-"${PWD}/test-results"}
RAPIDS_COVERAGE_DIR=${RAPIDS_COVERAGE_DIR:-"${PWD}/coverage-results"}
mkdir -p "${RAPIDS_TESTS_DIR}" "${RAPIDS_COVERAGE_DIR}"
-SUITEERROR=0
rapids-print-env
@@ -37,6 +36,8 @@ rapids-mamba-retry install \
rapids-logger "Check GPU usage"
nvidia-smi
+EXITCODE=0
+trap "EXITCODE=1" ERR
set +e
rapids-logger "pytest pylibraft"
@@ -49,12 +50,6 @@ pytest \
--cov-report=xml:"${RAPIDS_COVERAGE_DIR}/pylibraft-coverage.xml" \
--cov-report=term \
test
-exitcode=$?
-
-if (( ${exitcode} != 0 )); then
- SUITEERROR=${exitcode}
- echo "FAILED: 1 or more tests in pylibraft"
-fi
popd
rapids-logger "pytest raft-dask"
@@ -67,12 +62,7 @@ pytest \
--cov-report=xml:"${RAPIDS_COVERAGE_DIR}/raft-dask-coverage.xml" \
--cov-report=term \
test
-exitcode=$?
-
-if (( ${exitcode} != 0 )); then
- SUITEERROR=${exitcode}
- echo "FAILED: 1 or more tests in raft-dask"
-fi
popd
-exit ${SUITEERROR}
+rapids-logger "Test script exiting with value: $EXITCODE"
+exit ${EXITCODE}
diff --git a/conda/recipes/libraft/meta.yaml b/conda/recipes/libraft/meta.yaml
index a189068e00..b84f979572 100644
--- a/conda/recipes/libraft/meta.yaml
+++ b/conda/recipes/libraft/meta.yaml
@@ -37,6 +37,7 @@ outputs:
string: cuda{{ cuda_major }}_{{ date_string }}_{{ GIT_DESCRIBE_HASH }}_{{ GIT_DESCRIBE_NUMBER }}
ignore_run_exports_from:
- {{ compiler('cuda') }}
+ - librmm
requirements:
build:
- {{ compiler('c') }}
diff --git a/conda/recipes/pylibraft/meta.yaml b/conda/recipes/pylibraft/meta.yaml
index f01afd2add..4a9b98ac75 100644
--- a/conda/recipes/pylibraft/meta.yaml
+++ b/conda/recipes/pylibraft/meta.yaml
@@ -47,12 +47,11 @@ requirements:
- libraft-headers {{ version }}
- python x.x
-# TODO: Remove the linux64 tags on tests after disabling gpuCI / Jenkins
-tests: # [linux64]
- requirements: # [linux64]
- - cudatoolkit ={{ cuda_version }} # [linux64]
- imports: # [linux64]
- - pylibraft # [linux64]
+tests:
+ requirements:
+ - cudatoolkit ={{ cuda_version }}
+ imports:
+ - pylibraft
about:
home: https://rapids.ai/
diff --git a/conda/recipes/raft-dask/meta.yaml b/conda/recipes/raft-dask/meta.yaml
index daab2fa2fd..7f00ab4db1 100644
--- a/conda/recipes/raft-dask/meta.yaml
+++ b/conda/recipes/raft-dask/meta.yaml
@@ -58,12 +58,11 @@ requirements:
- ucx-proc=*=gpu
- ucx-py {{ ucx_py_version }}
-# TODO: Remove the linux64 tags on tests after disabling gpuCI / Jenkins
-tests: # [linux64]
- requirements: # [linux64]
- - cudatoolkit ={{ cuda_version }} # [linux64]
- imports: # [linux64]
- - raft_dask # [linux64]
+tests:
+ requirements:
+ - cudatoolkit ={{ cuda_version }}
+ imports:
+ - raft_dask
about:
home: https://rapids.ai/
diff --git a/cpp/include/raft/neighbors/specializations/detail/ivf_pq_search.cuh b/cpp/include/raft/neighbors/specializations/detail/ivf_pq_search.cuh
index 9e331c3f47..6eb8a2fc65 100644
--- a/cpp/include/raft/neighbors/specializations/detail/ivf_pq_search.cuh
+++ b/cpp/include/raft/neighbors/specializations/detail/ivf_pq_search.cuh
@@ -32,14 +32,9 @@ using fp8u_t = fp_8bit<5, false>;
extern template struct ivfpq_compute_similarity::configured; \
extern template struct ivfpq_compute_similarity::configured;
-#define RAFT_INST_ALL_IDX_T(OutT, LutT) \
- RAFT_INST(uint64_t, OutT, LutT) \
- RAFT_INST(int64_t, OutT, LutT) \
- RAFT_INST(uint32_t, OutT, LutT)
-
#define RAFT_INST_ALL_OUT_T(LutT) \
- RAFT_INST_ALL_IDX_T(float, LutT) \
- RAFT_INST_ALL_IDX_T(half, LutT)
+ RAFT_INST(uint64_t, float, LutT) \
+ RAFT_INST(uint64_t, half, LutT)
RAFT_INST_ALL_OUT_T(float)
RAFT_INST_ALL_OUT_T(half)
@@ -47,7 +42,6 @@ RAFT_INST_ALL_OUT_T(fp8s_t)
RAFT_INST_ALL_OUT_T(fp8u_t)
#undef RAFT_INST
-#undef RAFT_INST_ALL_IDX_T
#undef RAFT_INST_ALL_OUT_T
#define RAFT_INST(T, IdxT) \