Add YAML config files to run parameter sweeps for ANN benchmarks #1929

Merged
34 commits merged on Oct 31, 2023

Commits (34)
f53f128  Adding some initial yaml param conf files  (cjnolet, Oct 20, 2023)
0385543  Adding name  (cjnolet, Oct 20, 2023)
1bf6fa7  Adding datasets.yml  (cjnolet, Oct 20, 2023)
9e52c11  More cleanup  (cjnolet, Oct 20, 2023)
14e2c5d  add cross products for yaml->json configs  (divyegala, Oct 24, 2023)
b00942d  add algo-groups  (divyegala, Oct 25, 2023)
1f81d45  Updated validatoes  (cjnolet, Oct 25, 2023)
bcf92c5  remove only cagra yaml load  (divyegala, Oct 26, 2023)
933732c  fix configs  (divyegala, Oct 26, 2023)
32430c1  working yaml param sweeps  (divyegala, Oct 27, 2023)
b65e6d3  add docs  (divyegala, Oct 28, 2023)
1a810fd  add bench-ann cuda12 envs  (divyegala, Oct 28, 2023)
e0e3ac1  merge upstream  (divyegala, Oct 28, 2023)
a2e9b98  style fixes  (divyegala, Oct 28, 2023)
fa193b8  fix filename  (divyegala, Oct 28, 2023)
3b0a956  correct filename again  (divyegala, Oct 28, 2023)
4345124  remove debug print  (divyegala, Oct 28, 2023)
834f567  try to fix bad merge  (divyegala, Oct 28, 2023)
4d764f0  remove comment  (divyegala, Oct 28, 2023)
c7f9af3  Merge remote-tracking branch 'upstream/branch-23.12' into fea-2312-be…  (divyegala, Oct 30, 2023)
f644dfe  address review comments  (divyegala, Oct 30, 2023)
f470d86  add wiki datasets to datasets.yaml  (divyegala, Oct 30, 2023)
2b21ad0  fix style  (divyegala, Oct 30, 2023)
9ff0fbb  more style fixes  (divyegala, Oct 30, 2023)
cf086c2  fix when --configuration is a file  (divyegala, Oct 31, 2023)
0b6cc09  don't read json confs  (divyegala, Oct 31, 2023)
227486c  add nvtx dependency  (divyegala, Oct 31, 2023)
d80af8b  fix docs again  (divyegala, Oct 31, 2023)
33349ce  again fix docs  (divyegala, Oct 31, 2023)
879c744  fix style  (divyegala, Oct 31, 2023)
d2ccf2a  fix wike datasets config  (divyegala, Oct 31, 2023)
124d091  Changing FAISS M to M_ratio. Adding build validator for ivf-pq. Addin…  (cjnolet, Oct 31, 2023)
683cebe  Merge branch 'fea-2312-bench-ann-conf' of github.com:divyegala/raft i…  (cjnolet, Oct 31, 2023)
5538b05  More work on configs and validators  (cjnolet, Oct 31, 2023)
1 change: 1 addition & 0 deletions conda/environments/all_cuda-118_arch-aarch64.yaml
@@ -12,6 +12,7 @@ dependencies:
 - clang-tools=16.0.6
 - clang==16.0.6
 - cmake>=3.26.4
+- cuda-nvtx=11.8
 - cuda-profiler-api=11.8.86
 - cuda-python>=11.7.1,<12.0a0
 - cuda-version=11.8
1 change: 1 addition & 0 deletions conda/environments/all_cuda-118_arch-x86_64.yaml
@@ -12,6 +12,7 @@ dependencies:
 - clang-tools=16.0.6
 - clang==16.0.6
 - cmake>=3.26.4
+- cuda-nvtx=11.8
 - cuda-profiler-api=11.8.86
 - cuda-python>=11.7.1,<12.0a0
 - cuda-version=11.8
1 change: 1 addition & 0 deletions conda/environments/all_cuda-120_arch-aarch64.yaml
@@ -14,6 +14,7 @@ dependencies:
 - cmake>=3.26.4
 - cuda-cudart-dev
 - cuda-nvcc
+- cuda-nvtx-dev
 - cuda-profiler-api
 - cuda-python>=12.0,<13.0a0
 - cuda-version=12.0
1 change: 1 addition & 0 deletions conda/environments/all_cuda-120_arch-x86_64.yaml
@@ -14,6 +14,7 @@ dependencies:
 - cmake>=3.26.4
 - cuda-cudart-dev
 - cuda-nvcc
+- cuda-nvtx-dev
 - cuda-profiler-api
 - cuda-python>=12.0,<13.0a0
 - cuda-version=12.0
1 change: 1 addition & 0 deletions conda/environments/bench_ann_cuda-118_arch-aarch64.yaml
@@ -12,6 +12,7 @@ dependencies:
 - clang-tools=16.0.6
 - clang==16.0.6
 - cmake>=3.26.4
+- cuda-nvtx=11.8
 - cuda-profiler-api=11.8.86
 - cuda-version=11.8
 - cudatoolkit
1 change: 1 addition & 0 deletions conda/environments/bench_ann_cuda-118_arch-x86_64.yaml
@@ -12,6 +12,7 @@ dependencies:
 - clang-tools=16.0.6
 - clang==16.0.6
 - cmake>=3.26.4
+- cuda-nvtx=11.8
 - cuda-profiler-api=11.8.86
 - cuda-version=11.8
 - cudatoolkit
40 changes: 40 additions & 0 deletions conda/environments/bench_ann_cuda-120_arch-aarch64.yaml
@@ -0,0 +1,40 @@
# This file is generated by `rapids-dependency-file-generator`.
# To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
channels:
- rapidsai
- rapidsai-nightly
- dask/label/dev
- conda-forge
- nvidia
dependencies:
- benchmark>=1.8.2
- c-compiler
- clang-tools=16.0.6
- clang==16.0.6
- cmake>=3.26.4
- cuda-cudart-dev
- cuda-nvcc
- cuda-nvtx-dev
- cuda-profiler-api
- cuda-version=12.0
- cxx-compiler
- cython>=3.0.0
- gcc_linux-aarch64=11.*
- glog>=0.6.0
- h5py>=3.8.0
- hnswlib=0.7.0
- libcublas-dev
- libcurand-dev
- libcusolver-dev
- libcusparse-dev
- matplotlib
- nccl>=2.9.9
- ninja
- nlohmann_json>=3.11.2
- openblas
- pandas
- pyyaml
- rmm==23.12.*
- scikit-build>=0.13.1
- sysroot_linux-aarch64==2.17
name: bench_ann_cuda-120_arch-aarch64
40 changes: 40 additions & 0 deletions conda/environments/bench_ann_cuda-120_arch-x86_64.yaml
@@ -0,0 +1,40 @@
# This file is generated by `rapids-dependency-file-generator`.
# To make changes, edit ../../dependencies.yaml and run `rapids-dependency-file-generator`.
channels:
- rapidsai
- rapidsai-nightly
- dask/label/dev
- conda-forge
- nvidia
dependencies:
- benchmark>=1.8.2
- c-compiler
- clang-tools=16.0.6
- clang==16.0.6
- cmake>=3.26.4
- cuda-cudart-dev
- cuda-nvcc
- cuda-nvtx-dev
- cuda-profiler-api
- cuda-version=12.0
- cxx-compiler
- cython>=3.0.0
- gcc_linux-64=11.*
- glog>=0.6.0
- h5py>=3.8.0
- hnswlib=0.7.0
- libcublas-dev
- libcurand-dev
- libcusolver-dev
- libcusparse-dev
- matplotlib
- nccl>=2.9.9
- ninja
- nlohmann_json>=3.11.2
- openblas
- pandas
- pyyaml
- rmm==23.12.*
- scikit-build>=0.13.1
- sysroot_linux-64==2.17
name: bench_ann_cuda-120_arch-x86_64
2 changes: 1 addition & 1 deletion cpp/bench/ann/src/faiss/faiss_cpu_benchmark.cpp
@@ -49,7 +49,7 @@ void parse_build_param(const nlohmann::json& conf,
                        typename raft::bench::ann::FaissCpuIVFPQ<T>::BuildParam& param)
 {
   parse_base_build_param<T>(conf, param);
-  param.M = conf.at("M");
+  param.M_ratio = conf.at("M_ratio");
   if (conf.contains("usePrecomputed")) {
     param.usePrecomputed = conf.at("usePrecomputed");
   } else {
10 changes: 7 additions & 3 deletions cpp/bench/ann/src/faiss/faiss_cpu_wrapper.h
@@ -229,16 +229,20 @@ template <typename T>
 class FaissCpuIVFPQ : public FaissCpu<T> {
  public:
   struct BuildParam : public FaissCpu<T>::BuildParam {
-    int M;
+    int M_ratio;
     int bitsPerCode;
     bool usePrecomputed;
   };

   FaissCpuIVFPQ(Metric metric, int dim, const BuildParam& param) : FaissCpu<T>(metric, dim, param)
   {
     this->init_quantizer(dim);
-    this->index_ = std::make_unique<faiss::IndexIVFPQ>(
-      this->quantizer_.get(), dim, param.nlist, param.M, param.bitsPerCode, this->metric_type_);
+    this->index_ = std::make_unique<faiss::IndexIVFPQ>(this->quantizer_.get(),
+                                                       dim,
+                                                       param.nlist,
+                                                       dim / param.M_ratio,
+                                                       param.bitsPerCode,
+                                                       this->metric_type_);
   }

   void save(const std::string& file) const override
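
The FAISS IVF-PQ change above (mirrored in the GPU benchmark and wrapper below) replaces the absolute subquantizer count M with an M_ratio parameter: the index is now constructed with dim / M_ratio subquantizers, presumably so that one sweep value can be reused across datasets of different dimensionality. A hypothetical config excerpt, with the surrounding keys assumed for illustration only:

# For a 128-dimensional dataset, M_ratio: 4 builds the index with
# dim / M_ratio = 128 / 4 = 32 PQ subquantizers.
faiss_gpu_ivf_pq:
  build:
    nlist: [1024]
    M_ratio: [2, 4]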
2 changes: 1 addition & 1 deletion cpp/bench/ann/src/faiss/faiss_gpu_benchmark.cu
@@ -50,7 +50,7 @@ void parse_build_param(const nlohmann::json& conf,
                        typename raft::bench::ann::FaissGpuIVFPQ<T>::BuildParam& param)
 {
   parse_base_build_param<T>(conf, param);
-  param.M = conf.at("M");
+  param.M_ratio = conf.at("M_ratio");
   if (conf.contains("usePrecomputed")) {
     param.usePrecomputed = conf.at("usePrecomputed");
   } else {
5 changes: 3 additions & 2 deletions cpp/bench/ann/src/faiss/faiss_gpu_wrapper.h
@@ -263,7 +263,7 @@ template <typename T>
 class FaissGpuIVFPQ : public FaissGpu<T> {
  public:
   struct BuildParam : public FaissGpu<T>::BuildParam {
-    int M;
+    int M_ratio;
     bool useFloat16;
     bool usePrecomputed;
   };
@@ -274,11 +274,12 @@ class FaissGpuIVFPQ : public FaissGpu<T> {
     config.useFloat16LookupTables = param.useFloat16;
     config.usePrecomputedTables = param.usePrecomputed;
     config.device = this->device_;
+
     this->index_ =
       std::make_unique<faiss::gpu::GpuIndexIVFPQ>(&(this->gpu_resource_),
                                                   dim,
                                                   param.nlist,
-                                                  param.M,
+                                                  dim / param.M_ratio,
                                                   8, // FAISS only supports bitsPerCode=8
                                                   this->metric_type_,
                                                   config);
10 changes: 1 addition & 9 deletions cpp/bench/ann/src/raft/raft_benchmark.cu
@@ -272,13 +272,5 @@ REGISTER_ALGO_INSTANCE(std::uint8_t);

 #ifdef ANN_BENCH_BUILD_MAIN
 #include "../common/benchmark.hpp"
-int main(int argc, char** argv)
-{
-  rmm::mr::cuda_memory_resource cuda_mr;
-  // Construct a resource that uses a coalescing best-fit pool allocator
-  rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{&cuda_mr};
-  rmm::mr::set_current_device_resource(
-    &pool_mr); // Updates the current device resource pointer to `pool_mr`
-  return raft::bench::ann::run_main(argc, argv);
-}
+int main(int argc, char** argv) { return raft::bench::ann::run_main(argc, argv); }
 #endif
7 changes: 6 additions & 1 deletion dependencies.yaml
@@ -22,7 +22,7 @@ files:
 bench_ann:
 output: conda
 matrix:
-cuda: ["11.8"]
+cuda: ["11.8", "12.0"]
 arch: [x86_64, aarch64]
 includes:
 - build
@@ -246,6 +246,7 @@ dependencies:
 cuda: "12.0"
 packages:
 - cuda-version=12.0
+- cuda-nvtx-dev
 - cuda-cudart-dev
 - cuda-profiler-api
 - libcublas-dev
@@ -257,6 +258,7 @@
 packages:
 - cuda-version=11.8
 - cudatoolkit
+- cuda-nvtx=11.8
 - cuda-profiler-api=11.8.86
 - libcublas-dev=11.11.3.6
 - libcublas=11.11.3.6
@@ -271,6 +273,7 @@
 packages:
 - cuda-version=11.5
 - cudatoolkit
+- cuda-nvtx=11.5
 - cuda-profiler-api>=11.4.240,<=11.8.86 # use any `11.x` version since pkg is missing several CUDA/arch packages
 - libcublas-dev>=11.7.3.1,<=11.7.4.6
 - libcublas>=11.7.3.1,<=11.7.4.6
@@ -285,6 +288,7 @@
 packages:
 - cuda-version=11.4
 - cudatoolkit
+- &cudanvtx114 cuda-nvtx=11.4
 - cuda-profiler-api>=11.4.240,<=11.8.86 # use any `11.x` version since pkg is missing several CUDA/arch packages
 - &libcublas_dev114 libcublas-dev>=11.5.2.43,<=11.6.5.2
 - &libcublas114 libcublas>=11.5.2.43,<=11.6.5.2
@@ -299,6 +303,7 @@
 packages:
 - cuda-version=11.2
 - cudatoolkit
+- *cudanvtx114
 - cuda-profiler-api>=11.4.240,<=11.8.86 # use any `11.x` version since pkg is missing several CUDA/arch packages
 # The NVIDIA channel doesn't publish pkgs older than 11.4 for these libs,
 # so 11.2 uses 11.4 packages (the oldest available).
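
The &cudanvtx114 / *cudanvtx114 pair in the last two hunks is standard YAML anchor and alias syntax: the CUDA 11.4 matrix defines the cuda-nvtx=11.4 pin once and the CUDA 11.2 matrix reuses it, following the same pattern already used for the libcublas pins. A minimal standalone example (not taken from this repository):

cuda_114_packages:
  - &nvtx114 cuda-nvtx=11.4   # anchor: names this scalar value
cuda_112_packages:
  - *nvtx114                  # alias: resolves to the same string, cuda-nvtx=11.4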