WIP: Correct signatures for torch allocator plugin
Since pytorch/pytorch#91398, the signatures of the pluggable allocate and deallocate functions must accept the device id. The current version only accepts a device id for allocate, which means that when using a stream-ordered allocator with devices other than device zero, we pass an invalid stream into the deallocation function. To fix this, adapt the signatures to match the ones PyTorch expects.

Now that we have the device available during allocation and deallocation, we would like to use that device to obtain the appropriate memory resource. Unfortunately, since RMM's cuda_device_id does not have a nullary constructor, we can't use it in Cython without some hacky workarounds. However, since we don't actually need to build a Python module, but rather just a single shared library that offers two extern "C" functions, let's write our allocator hooks directly in C++.

Closes #1405
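For context, a sketch of how PyTorch can consume such a shared library, assuming the C++ file below has been compiled into a hypothetical librmm_torch_allocator.so (the entry-point names must match the extern "C" symbols it exports):

import torch

# Hypothetical library name; point this at wherever the shared
# library built from the C++ file below is installed.
allocator = torch.cuda.memory.CUDAPluggableAllocator(
    "librmm_torch_allocator.so", "allocate", "deallocate"
)
# Route all of PyTorch's CUDA allocations through these hooks.
torch.cuda.memory.change_current_allocator(allocator)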
Showing 4 changed files with 60 additions and 31 deletions.
This file was deleted.
@@ -0,0 +1,56 @@
/*
 * Copyright (c) 2023, NVIDIA CORPORATION.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include <cuda_runtime_api.h>

#include <rmm/cuda_device.hpp>
#include <rmm/cuda_stream_view.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

// These signatures must match those required by CUDAPluggableAllocator in
// github.com/pytorch/pytorch/blob/main/torch/csrc/cuda/CUDAPluggableAllocator.h
// Since the loading is done at runtime via dlopen, no error checking
// can be performed.

/**
 * @brief Allocate memory of at least \p size bytes.
 *
 * @throws rmm::bad_alloc When the requested allocation cannot be satisfied.
 *
 * @param size The number of bytes to allocate
 * @param device The device whose memory resource one should use
 * @param stream CUDA stream to perform allocation on
 * @return void* Pointer to the newly allocated memory
 */
extern "C" void* allocate(std::size_t size, int device, void* stream)
{
  auto mr = rmm::mr::get_per_device_resource(rmm::cuda_device_id{device});
  return mr->allocate(size, rmm::cuda_stream_view{static_cast<cudaStream_t>(stream)});
}

/**
 * @brief Deallocate memory pointed to by \p ptr.
 *
 * @param ptr Pointer to be deallocated
 * @param size The number of bytes in the allocation
 * @param device The device whose memory resource one should use
 * @param stream CUDA stream to perform deallocation on
 */
extern "C" void deallocate(void* ptr, std::size_t size, int device, void* stream)
{
  auto mr = rmm::mr::get_per_device_resource(rmm::cuda_device_id{device});
  mr->deallocate(ptr, size, rmm::cuda_stream_view{static_cast<cudaStream_t>(stream)});
}
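Since both hooks look up the memory resource by device id, every device PyTorch allocates on needs a stream-ordered resource registered with RMM. A sketch of one way to do that from Python, using rmm.mr.CudaAsyncMemoryResource as an illustrative choice of stream-ordered resource:

import rmm
import torch

# Illustrative setup: register a cudaMallocAsync-backed resource for
# every visible device, so the device id passed into allocate/deallocate
# above resolves to a real stream-ordered pool.
for device in range(torch.cuda.device_count()):
    torch.cuda.set_device(device)  # the resource binds to the current device
    rmm.mr.set_per_device_resource(device, rmm.mr.CudaAsyncMemoryResource())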