Simplify bench/ann scripts to Python based module #1642

Merged on Jul 26, 2023 (33 commits)

Changes from 29 commits

Commits (33):
- d6b5f4e: add utility to download and move files (divyegala, Jul 11, 2023)
- 0c6c33b: run pre-commit (divyegala, Jul 11, 2023)
- 1be4eb1: add copyright (divyegala, Jul 11, 2023)
- a50cd97: start working on runner script (divyegala, Jul 11, 2023)
- e9b4eca: working build and search (divyegala, Jul 13, 2023)
- 87ad2b3: Merge remote-tracking branch 'upstream/branch-23.08' into bench-ann-s… (divyegala, Jul 13, 2023)
- 3a12d40: fix spelling (divyegala, Jul 13, 2023)
- eb6d9a2: run flake8 manually (divyegala, Jul 13, 2023)
- cd86fa3: add data_export.py script (divyegala, Jul 13, 2023)
- a548b22: run flake8 manually (divyegala, Jul 13, 2023)
- df16559: Update cpp/bench/ann/scripts/run.py (divyegala, Jul 14, 2023)
- d4f30de: review suggestions (divyegala, Jul 14, 2023)
- b665a64: add docs (divyegala, Jul 14, 2023)
- 079d8ef: spelling check (divyegala, Jul 14, 2023)
- c62c423: address review (divyegala, Jul 14, 2023)
- 746c214: Merge remote-tracking branch 'upstream/branch-23.08' into bench-ann-s… (divyegala, Jul 14, 2023)
- 1430155: add faiss_gpu_ivf_sq (divyegala, Jul 18, 2023)
- a38f21c: address review to use new string formatting, add plot.py (divyegala, Jul 19, 2023)
- 94ddec4: add end-to-end docs for b scale (divyegala, Jul 19, 2023)
- cf44279: add plotting (divyegala, Jul 19, 2023)
- 7b4711e: correct executable=>algo strategy (divyegala, Jul 19, 2023)
- 7465b8d: address review (divyegala, Jul 20, 2023)
- 9b978c9: Merge branch 'branch-23.08' into bench-ann-scripts (cjnolet, Jul 20, 2023)
- 76d45fd: modify docs (divyegala, Jul 20, 2023)
- 494609e: fix some typos (divyegala, Jul 20, 2023)
- d46d49d: run benchmarks with conda package (divyegala, Jul 20, 2023)
- 5adbf36: fix spelling (divyegala, Jul 21, 2023)
- 2f1e8ca: add build/search params to run.py (divyegala, Jul 21, 2023)
- 3ac8d76: add destructors to fix running raft benchmarks (divyegala, Jul 21, 2023)
- ee61877: move algos.yaml (divyegala, Jul 21, 2023)
- dbfae90: Merge branch 'branch-23.08' into bench-ann-scripts (cjnolet, Jul 24, 2023)
- a0bf789: address review (divyegala, Jul 25, 2023)
- 7c1a6cf: add cmake example (divyegala, Jul 25, 2023)
1 change: 1 addition & 0 deletions conda/environments/bench_ann_cuda-118_arch-x86_64.yaml
@@ -30,6 +30,7 @@ dependencies:
- libcusparse-dev=11.7.5.86
- libcusparse=11.7.5.86
- libfaiss>=1.7.1
- matplotlib
- nccl>=2.9.9
- ninja
- nlohmann_json>=3.11.2
30 changes: 30 additions & 0 deletions cpp/bench/ann/algos.yaml
@@ -0,0 +1,30 @@
faiss_gpu_ivf_flat:
executable: FAISS_IVF_FLAT_ANN_BENCH
disabled: false
faiss_gpu_flat:
executable: FAISS_IVF_FLAT_ANN_BENCH
disabled: false
faiss_gpu_ivf_pq:
executable: FAISS_IVF_PQ_ANN_BENCH
disabled: false
faiss_gpu_ivf_sq:
executable: FAISS_IVF_PQ_ANN_BENCH
disabled: false
faiss_gpu_bfknn:
executable: FAISS_BFKNN_ANN_BENCH
disabled: false
raft_ivf_flat:
executable: RAFT_IVF_FLAT_ANN_BENCH
disabled: false
raft_ivf_pq:
executable: RAFT_IVF_PQ_ANN_BENCH
disabled: false
raft_cagra:
executable: RAFT_CAGRA_ANN_BENCH
disabled: false
ggnn:
executable: GGNN_ANN_BENCH
disabled: false
hnswlib:
executable: HNSWLIB_ANN_BENCH
disabled: false
6 changes: 1 addition & 5 deletions cpp/bench/ann/conf/glove-100-inner.json
@@ -789,9 +789,5 @@

],
"search_result_file" : "result/glove-100-inner/ggnn/kbuild96-segment64-refine2-k10"
},


]

}]
}
2 changes: 2 additions & 0 deletions cpp/bench/ann/src/raft/raft_cagra_wrapper.h
@@ -79,6 +79,8 @@ class RaftCagra : public ANN<T> {
void save(const std::string& file) const override;
void load(const std::string&) override;

~RaftCagra() noexcept { rmm::mr::set_current_device_resource(mr_.get_upstream()); }

private:
raft::device_resources handle_;
BuildParam index_params_;
2 changes: 2 additions & 0 deletions cpp/bench/ann/src/raft/raft_ivf_flat_wrapper.h
@@ -79,6 +79,8 @@ class RaftIvfFlatGpu : public ANN<T> {
void save(const std::string& file) const override;
void load(const std::string&) override;

~RaftIvfFlatGpu() noexcept { rmm::mr::set_current_device_resource(mr_.get_upstream()); }

private:
raft::device_resources handle_;
BuildParam index_params_;
2 changes: 2 additions & 0 deletions cpp/bench/ann/src/raft/raft_ivf_pq_wrapper.h
@@ -79,6 +79,8 @@ class RaftIvfPQ : public ANN<T> {
void save(const std::string& file) const override;
void load(const std::string&) override;

~RaftIvfPQ() noexcept { rmm::mr::set_current_device_resource(mr_.get_upstream()); }

private:
raft::device_resources handle_;
BuildParam index_params_;
1 change: 1 addition & 0 deletions dependencies.yaml
@@ -169,6 +169,7 @@ dependencies:
- h5py>=3.8.0
- libfaiss>=1.7.1
- faiss-proc=*=cuda
- matplotlib

cudatoolkit:
specific:
48 changes: 48 additions & 0 deletions docs/source/ann_benchmarks_build.md
@@ -0,0 +1,48 @@
### Dependencies

CUDA 11 and a GPU with Pascal architecture or later are required to run the benchmarks.

Please refer to the [installation docs](https://docs.rapids.ai/api/raft/stable/build.html#cuda-gpu-requirements) for the base requirements to build RAFT.

In addition to the base requirements for building RAFT, additional dependencies needed to build the ANN benchmarks include:
1. FAISS GPU >= 1.7.1
2. Google Logging (GLog)
3. H5Py
4. HNSWLib
5. nlohmann_json
6. GGNN

[rapids-cmake](https://github.com/rapidsai/rapids-cmake) is used to build the ANN benchmarks so the code for dependencies not already supplied in the CUDA toolkit will be downloaded and built automatically.

The easiest (and most reproducible) way to install the dependencies needed to build the ANN benchmarks is to use the conda environment file located in the `conda/environments` directory of the RAFT repository. The following command will use `mamba` (which is preferred over `conda`) to build and activate a new environment for compiling the benchmarks:

```bash
mamba env create --name raft_ann_benchmarks -f conda/environments/bench_ann_cuda-118_arch-x86_64.yaml
conda activate raft_ann_benchmarks
```

The above conda environment will also reduce the compile times as dependencies like FAISS will already be installed and not need to be compiled with `rapids-cmake`.

### Compiling the Benchmarks

After the needed dependencies are satisfied, the easiest way to compile the ANN benchmarks is through the `build.sh` script in the root of the RAFT source code repository. The following will build the executables for all of the supported algorithms:
```bash
./build.sh bench-ann
```

You can limit the algorithms that are built by providing a semicolon-delimited list of executable names (each algorithm is suffixed with `_ANN_BENCH`):
```bash
./build.sh bench-ann -n --limit-bench-ann="HNSWLIB_ANN_BENCH;RAFT_IVF_PQ_ANN_BENCH"
```

Available targets to use with `--limit-bench-ann` are:
- FAISS_IVF_FLAT_ANN_BENCH
- FAISS_IVF_PQ_ANN_BENCH
- FAISS_BFKNN_ANN_BENCH
- GGNN_ANN_BENCH
- HNSWLIB_ANN_BENCH
- RAFT_CAGRA_ANN_BENCH
- RAFT_IVF_PQ_ANN_BENCH
- RAFT_IVF_FLAT_ANN_BENCH
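
For example, to build only the RAFT algorithms from the list above (an illustrative invocation; any subset of these targets can be combined the same way):
```bash
./build.sh bench-ann -n --limit-bench-ann="RAFT_CAGRA_ANN_BENCH;RAFT_IVF_PQ_ANN_BENCH;RAFT_IVF_FLAT_ANN_BENCH"
```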

By default, the `*_ANN_BENCH` executables infer the dataset's datatype from the filename's extension. For example, an extension of `fbin` uses a `float` datatype, `f16bin` uses a `float16` datatype, `i8bin` uses an `int8_t` datatype, and `u8bin` uses a `uint8_t` datatype. Currently, only `float`, `float16`, `int8_t`, and `uint8_t` are supported.
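
As a purely illustrative sketch (the directory and file names below are hypothetical), the extension alone selects how each file is parsed:
```bash
# Hypothetical dataset files; only the extension determines the datatype used
ls my-dataset/
#   base.fbin     -> float
#   base.f16bin   -> float16
#   base.i8bin    -> int8_t
#   base.u8bin    -> uint8_t
```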
@@ -1,65 +1,4 @@
# CUDA ANN Benchmarks

This project provides a benchmark program for various ANN search implementations. It's especially suitable for comparing GPU implementations as well as comparing GPU against CPU.

## Benchmark

### Dependencies

CUDA 11 and a GPU with Pascal architecture or later are required to run the benchmarks.

Please refer to the [installation docs](https://docs.rapids.ai/api/raft/stable/build.html#cuda-gpu-requirements) for the base requirements to build RAFT.

In addition to the base requirements for building RAFT, additional dependencies needed to build the ANN benchmarks include:
1. FAISS GPU >= 1.7.1
2. Google Logging (GLog)
3. H5Py
4. HNSWLib
5. nlohmann_json
6. GGNN

[rapids-cmake](https://github.com/rapidsai/rapids-cmake) is used to build the ANN benchmarks so the code for dependencies not already supplied in the CUDA toolkit will be downloaded and built automatically.

The easiest (and most reproducible) way to install the dependencies needed to build the ANN benchmarks is to use the conda environment file located in the `conda/environments` directory of the RAFT repository. The following command will use `mamba` (which is preferred over `conda`) to build and activate a new environment for compiling the benchmarks:

```bash
mamba env create --name raft_ann_benchmarks -f conda/environments/bench_ann_cuda-118_arch-x86_64.yaml
conda activate raft_ann_benchmarks
```

The above conda environment will also reduce the compile times as dependencies like FAISS will already be installed and not need to be compiled with `rapids-cmake`.

### Compiling the Benchmarks

After the needed dependencies are satisfied, the easiest way to compile the ANN benchmarks is through the `build.sh` script in the root of the RAFT source code repository. The following will build the executables for all of the supported algorithms:
```bash
./build.sh bench-ann
```

You can limit the algorithms that are built by providing a semicolon-delimited list of executable names (each algorithm is suffixed with `_ANN_BENCH`):
```bash
./build.sh bench-ann -n --limit-bench-ann="HNSWLIB_ANN_BENCH;RAFT_IVF_PQ_ANN_BENCH"
```

Available targets to use with `--limit-bench-ann` are:
- FAISS_IVF_FLAT_ANN_BENCH
- FAISS_IVF_PQ_ANN_BENCH
- FAISS_BFKNN_ANN_BENCH
- GGNN_ANN_BENCH
- HNSWLIB_ANN_BENCH
- RAFT_CAGRA_ANN_BENCH
- RAFT_IVF_PQ_ANN_BENCH
- RAFT_IVF_FLAT_ANN_BENCH

By default, the `*_ANN_BENCH` executables infer the dataset's datatype from the filename's extension. For example, an extension of `fbin` uses a `float` datatype, `f16bin` uses a `float16` datatype, `i8bin` uses an `int8_t` datatype, and `u8bin` uses a `uint8_t` datatype. Currently, only `float`, `float16`, `int8_t`, and `uint8_t` are supported.

### Usage
There are 4 general steps to running the benchmarks:
1. Prepare Dataset
2. Build Index
3. Search Using Built Index
4. Evaluate Result

### Low-level Scripts and Executables
#### End-to-end Example
An end-to-end example (run from the RAFT source code root directory):
```bash
@@ -99,7 +38,7 @@ popd
# optional step: plot QPS-Recall figure using data in result.csv with your favorite tool
```

##### Step 1: Prepare Dataset
##### Step 1: Prepare Dataset <a id='bash-prepare-dataset'></a>
A dataset usually has 4 binary files containing the database vectors, query vectors, ground truth neighbors, and their corresponding distances. For example, the Glove-100 dataset has files `base.fbin` (database vectors), `query.fbin` (query vectors), `groundtruth.neighbors.ibin` (ground truth neighbors), and `groundtruth.distances.fbin` (ground truth distances). The first two files are used for index building and searching, while the other two are associated with a particular distance and are used for evaluation.

The file suffixes `.fbin`, `.f16bin`, `.ibin`, `.u8bin`, and `.i8bin` denote that the data types of the vectors stored in the files are `float32`, `float16` (a.k.a. `half`), `int`, `uint8`, and `int8`, respectively.
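
For instance, a prepared Glove-100 directory might look like the sketch below (the directory name is an assumption; the four file names come from the description above):
```bash
ls data/glove-100-inner/
# base.fbin                    # database vectors (float32)
# query.fbin                   # query vectors (float32)
# groundtruth.neighbors.ibin   # ground truth neighbor indices (int)
# groundtruth.distances.fbin   # ground truth distances (float32)
```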
@@ -128,7 +67,7 @@ Commonly used datasets can be downloaded from two websites:

Most datasets provided by `ann-benchmarks` use `Angular` or `Euclidean` distance. `Angular` denotes cosine distance. However, computing cosine distance reduces to computing inner product by normalizing vectors beforehand. In practice, we can always do the normalization to decrease computation cost, so it's better to measure the performance of inner product rather than cosine distance. The `-n` option of `hdf5_to_fbin.py` can be used to normalize the dataset.
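
A minimal sketch of that normalization step, assuming the script takes the downloaded `.hdf5` file as its positional argument (only the `-n` flag is documented here):
```bash
# Assumed invocation: convert and normalize so inner product can stand in for cosine distance
cpp/bench/ann/scripts/hdf5_to_fbin.py -n glove-100-angular.hdf5
```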

2. Billion-scale datasets can be found at [`big-ann-benchmarks`](http://big-ann-benchmarks.com). The ground truth file contains both neighbors and distances, thus should be split. A script is provided for this:
2. <a id='billion-scale'></a>Billion-scale datasets can be found at [`big-ann-benchmarks`](http://big-ann-benchmarks.com). The ground truth file contains both neighbors and distances, thus should be split. A script is provided for this:
```bash
$ cpp/bench/ann/scripts/split_groundtruth.pl
usage: script/split_groundtruth.pl input output_prefix
@@ -237,7 +176,7 @@ usage: [-f] [-o output.csv] groundtruth.neighbors.ibin result_paths...
-f: force to recompute recall and update it in result file if needed
-o: also write result to a csv file
```
Note that there can be multiple arguments for paths of result files. Each argument can be either a file name or a path. If it's a directory, all files found under it recursively will be used as input files.
<a id='result-filepath-example'></a>Note that there can be multiple arguments for paths of result files. Each argument can be either a file name or a path. If it's a directory, all files found under it recursively will be used as input files.
An example:
```bash
cpp/bench/ann/scripts/eval.pl groundtruth.neighbors.ibin \
@@ -274,7 +213,7 @@ public:
};
```

The benchmark program uses a JSON configuration file. To add a new algorithm to the benchmark, you need to be able to specify `build_param`, whose value is a JSON object, and `search_params`, whose value is an array of JSON objects, for that algorithm in the configuration file. Again, take the configuration for `HnswLib` as an example:
<a id='json-index-config'></a>The benchmark program uses a JSON configuration file. To add a new algorithm to the benchmark, you need to be able to specify `build_param`, whose value is a JSON object, and `search_params`, whose value is an array of JSON objects, for that algorithm in the configuration file. Again, take the configuration for `HnswLib` as an example:
```json
{
"name" : "...",
Expand Down
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -44,7 +44,7 @@ While not exhaustive, the following general categories help summarize the accele
developer_guide.md
cpp_api.rst
pylibraft_api.rst
cuda_ann_benchmarks.md
raft_ann_benchmarks.md
raft_dask_api.rst
using_comms.rst
using_libraft.md
Expand Down