
Add raft::stats::neighborhood_recall #1860

Merged: 17 commits merged into rapidsai:branch-23.12 on Oct 9, 2023

Conversation

divyegala
Member

This PR adds a primitive to compute the recall score for nearest-neighbor algorithms.

@divyegala divyegala added feature request New feature or request non-breaking Non-breaking change labels Sep 27, 2023
@divyegala divyegala self-assigned this Sep 27, 2023
@divyegala divyegala requested a review from a team as a code owner September 27, 2023 19:51
@github-actions github-actions bot added the cpp label Sep 27, 2023
@divyegala divyegala requested a review from a team as a code owner September 27, 2023 21:36
@github-actions github-actions bot added the CMake label Sep 27, 2023
@cjnolet cjnolet changed the title Add raft::stats::recall Add raft::stats::neighbors_recall Oct 2, 2023
@cjnolet cjnolet changed the title Add raft::stats::neighbors_recall Add raft::stats::neighborhood_recall Oct 2, 2023
@divyegala divyegala requested review from a team as code owners October 4, 2023 16:31
@divyegala divyegala changed the base branch from branch-23.10 to branch-23.12 October 4, 2023 16:31
@divyegala divyegala requested review from lowener and cjnolet October 4, 2023 16:47
Contributor

@lowener lowener left a comment


LGTM

benfred and others added 7 commits October 6, 2023 08:28
Building on cuda 12.2 shows errors like

```
/code/raft/cpp/include/raft/spatial/knn/detail/ball_cover/registers-inl.cuh(177): error #20054-D: dynamic initialization is not supported for a function-scope static __shared__ variable within a __device__/__global__ function
              KeyValuePair<value_t, value_idx> shared_memV[kNumWarps * warp_q];
```

Fix by relying on implicitly-defaulted constructors for structures placed in shared memory; even a trivial user-provided constructor triggers this error.
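A minimal sketch of the pattern involved (struct and kernel names invented for illustration; based on the diagnostic quoted above): a type with a user-provided constructor requires dynamic initialization, which is not allowed for a function-scope static `__shared__` variable, while an aggregate without one compiles fine.

```cuda
// Type with a user-provided constructor: placing a static __shared__
// array of this type in a kernel triggers error #20054-D on CUDA 12.2.
struct pair_with_ctor {
  float key;
  int value;
  __host__ __device__ pair_with_ctor() : key(0.f), value(0) {}
};

// Aggregate with only the implicitly-defaulted constructor: fine.
struct plain_pair {
  float key;
  int value;
};

__global__ void bad_kernel() {
  // error #20054-D: dynamic initialization is not supported for a
  // function-scope static __shared__ variable
  // static __shared__ pair_with_ctor buf[32];
}

__global__ void good_kernel() {
  static __shared__ plain_pair buf[32];  // OK: no dynamic initialization
  (void)buf;
}
```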

Authors:
  - Ben Frederickson (https://github.com/benfred)
  - Corey J. Nolet (https://github.com/cjnolet)

Approvers:
  - Corey J. Nolet (https://github.com/cjnolet)

URL: rapidsai#1870
I got errors when compiling a program using raft NN-descent. This PR fixes the bug.

## Error
e.g.
```
/home/.../include/raft/neighbors/detail/nn_descent.cuh(1158): error: invalid narrowing conversion from "unsigned long" to "int"
      h_rev_graph_old_{static_cast<size_t>(nrow_ * NUM_SAMPLES)},
```

- nvcc

```
Built on Tue_Aug_15_22:02:13_PDT_2023
Cuda compilation tools, release 12.2, V12.2.140
Build cuda_12.2.r12.2/compiler.33191640_0
```

- gcc
```
gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
```

- thrust : 2.2 (the cause of this error; details in a comment on rapidsai#1869)

# Change
Use `Class(...)` instead of `Class{...}`.

# Cause
The NN-descent code calls constructors of `thrust::host_vector` as shown below:
```cpp
graph_host_buffer_{static_cast<size_t>(nrow_ * DEGREE_ON_DEVICE)},
```
However, with braces this call is treated as list-initialization, so the argument becomes an initializer-list element rather than a size (and the `size_t` value narrows to the element type).
This is the same mechanism that makes the following code print 1 (one element with value 2) rather than 2 (two elements).
```cpp
#include <iostream>
#include <vector>

int main() {
  std::vector<float> list{2};

  std::cout << list.size() << std::endl;
}
```

[detail](https://en.cppreference.com/w/cpp/language/list_initialization)

Authors:
  - tsuki (https://github.com/enp1s0)
  - Corey J. Nolet (https://github.com/cjnolet)

Approvers:
  - Corey J. Nolet (https://github.com/cjnolet)
  - Divye Gala (https://github.com/divyegala)

URL: rapidsai#1869
NN-Descent was using the `int` type for indexing into `mdarray`; this overflowed whenever the product of all extents exceeded the maximum value of `int`.

This PR also adds/fixes:

- Missing dependencies for `raft-ann-bench` development environment
- Exposes NN Descent iterations to use in CAGRA benchmarks

Authors:
  - Divye Gala (https://github.com/divyegala)

Approvers:
  - Corey J. Nolet (https://github.com/cjnolet)
  - Ray Douglass (https://github.com/raydouglass)

URL: rapidsai#1875

# Purpose
This PR provides a utility for copying between generic mdspans: between host and device, between mdspans of different layouts, and between mdspans of different (convertible) data types.

## API
`raft::copy(raft_resources, dest_mdspan, src_mdspan);`

# Limitations

- Currently does not support copies between mdspans on two different GPUs
- Currently not performant for generic host-to-host copies (would be much easier to optimize with submdspan for padded layouts)
- Submdspan with padded layouts would also make it easier to improve perf of some device-to-device copies, though perf should already be quite good for most device-to-device copies.

# Design

- Includes optional `RAFT_DISABLE_CUDA` build definition in order to use this utility in CUDA-free builds (important for use in the FIL backend for Triton)
- Includes a new `raft::stream_view` object which is a thin wrapper around `rmm::stream_view`. Its purpose is solely to provide a symbol that will be defined in CUDA-free builds and which will throw exceptions or log error messages if someone tries to use a CUDA stream in a CUDA-free build. This avoids a whole bunch of ifdefs that would otherwise infect the whole codebase.
- Uses (roughly in order of preference): `cudaMemcpyAsync, std::copy, cublas, custom device kernel, custom host-to-host transfer logic` for the underlying copy
- Provides two different headers: `raft/core/copy.hpp` and `raft/core/copy.cuh`. This is to accommodate the custom kernel necessary for handling completely generic device-to-device copies. See below for more details.

## Details on the header split
For many instantiations, even those which involve the device, we do not require nvcc compilation. If, however, we determine at compilation time that we must use a custom kernel for the copy, then we must invoke nvcc. We do not wish to indicate that a public header file is a C++ header when it is a CUDA header or vice versa, so we split the definitions into separate `hpp` and `cuh` files, with all template instantiations requiring the custom kernel enable-if'd out of the hpp file.

Thus, the cuh header can be used for _any_ mdspan-to-mdspan copy, but the hpp file will not compile for those specific instantiations that require a custom kernel. The recommended workflow is that if a `cpp` file requires an mdspan-to-mdspan copy, first try the `hpp` header. If that fails, the `cpp` file must be converted to a `cu` file, and the `cuh` header should be used. For source files that are already being compiled with nvcc (i.e. `.cu` files), the `cuh` header might as well be used and will not result in any additional compile time penalty.

# Remaining tasks to leave WIP status

- [x] Add benchmarks for copies
- [x] Ensure that new function is correctly added to docs

# Follow-up items

- Optimize host-to-host transfers using a cache-oblivious approach with SIMD-accelerated transposes for contiguous memory
- Test cache-oblivious device-to-device transfers and compare performance
- Provide transparent support for copies between devices.

## Relationship to mdbuffer
This utility encapsulates a substantial chunk of the core logic required for the mdbuffer implementation. It is being split into its own PR both because it is useful on its own and because the mdbuffer work has been delayed by higher priority tasks.

Close rapidsai#1779

Authors:
  - William Hicks (https://github.com/wphicks)
  - Tarang Jain (https://github.com/tarang-jain)
  - Corey J. Nolet (https://github.com/cjnolet)

Approvers:
  - Divye Gala (https://github.com/divyegala)

URL: rapidsai#1818
@divyegala
Member Author

/merge

@rapids-bot rapids-bot bot merged commit 518ed6c into rapidsai:branch-23.12 Oct 9, 2023
59 checks passed
Labels: CMake, cpp, feature request (New feature or request), non-breaking (Non-breaking change)
6 participants