
Remove deprecated functionality #1537

Merged: 17 commits, Apr 30, 2024

Changes from all commits
1 change: 0 additions & 1 deletion conda/recipes/librmm/meta.yaml
@@ -81,7 +81,6 @@ outputs:
- spdlog {{ spdlog_version }}
test:
commands:
- test -f $PREFIX/include/rmm/thrust_rmm_allocator.h
- test -f $PREFIX/include/rmm/logger.hpp
- test -f $PREFIX/include/rmm/cuda_stream.hpp
- test -f $PREFIX/include/rmm/cuda_stream_view.hpp
81 changes: 0 additions & 81 deletions include/rmm/detail/aligned.hpp
@@ -26,87 +26,6 @@

namespace rmm::detail {

/**
* @brief Default alignment used for host memory allocated by RMM.
*
*/
[[deprecated("Use rmm::RMM_DEFAULT_HOST_ALIGNMENT instead.")]] static constexpr std::size_t
RMM_DEFAULT_HOST_ALIGNMENT{rmm::RMM_DEFAULT_HOST_ALIGNMENT};

/**
* @brief Default alignment used for CUDA memory allocation.
*
*/
[[deprecated("Use rmm::CUDA_ALLOCATION_ALIGNMENT instead.")]] static constexpr std::size_t
CUDA_ALLOCATION_ALIGNMENT{rmm::CUDA_ALLOCATION_ALIGNMENT};

/**
* @brief Returns whether or not `value` is a power of 2.
*
*/
[[deprecated("Use rmm::is_pow2 instead.")]] constexpr bool is_pow2(std::size_t value) noexcept
{
return rmm::is_pow2(value);
}

/**
* @brief Returns whether or not `alignment` is a valid memory alignment.
*
*/
[[deprecated("Use rmm::is_supported_alignment instead.")]] constexpr bool is_supported_alignment(
std::size_t alignment) noexcept
{
return rmm::is_pow2(alignment);
}

/**
* @brief Align up to nearest multiple of specified power of 2
*
* @param[in] value value to align
* @param[in] alignment amount, in bytes, must be a power of 2
*
* @return Return the aligned value, as one would expect
*/
[[deprecated("Use rmm::align_up instead.")]] constexpr std::size_t align_up(
std::size_t value, std::size_t alignment) noexcept
{
return rmm::align_up(value, alignment);
}

/**
* @brief Align down to the nearest multiple of specified power of 2
*
* @param[in] value value to align
* @param[in] alignment amount, in bytes, must be a power of 2
*
* @return Return the aligned value, as one would expect
*/
[[deprecated("Use rmm::align_down instead.")]] constexpr std::size_t align_down(
std::size_t value, std::size_t alignment) noexcept
{
return rmm::align_down(value, alignment);
}

/**
* @brief Checks whether a value is aligned to a multiple of a specified power of 2
*
* @param[in] value value to check for alignment
* @param[in] alignment amount, in bytes, must be a power of 2
*
* @return true if aligned
*/
[[deprecated("Use rmm::is_aligned instead.")]] constexpr bool is_aligned(
std::size_t value, std::size_t alignment) noexcept
{
return rmm::is_aligned(value, alignment);
}

[[deprecated("Use rmm::is_pointer_aligned instead.")]] inline bool is_pointer_aligned(
void* ptr, std::size_t alignment = rmm::CUDA_ALLOCATION_ALIGNMENT)
{
return rmm::is_pointer_aligned(ptr, alignment);
}

/**
* @brief Allocates sufficient host-accessible memory to satisfy the requested size `bytes` with
* alignment `alignment` using the unary callable `alloc` to allocate memory.
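The deleted `rmm::detail` helpers above are all pure forwarders to identically named functions in the `rmm` namespace, so callers migrate simply by dropping the `detail::` qualifier. For reference, the power-of-2 alignment arithmetic behind these helpers can be sketched as follows. This is a self-contained illustration: the signatures mirror the deleted code, but the bodies are the standard bit-twiddling formulation, not a verbatim copy of RMM's implementation.

```cpp
#include <cstddef>

// Illustrative stand-ins for the rmm:: free functions that the deleted
// rmm::detail wrappers forwarded to. Not the actual RMM implementation.
constexpr bool is_pow2(std::size_t value) noexcept
{
  // A power of 2 has exactly one bit set, so value & (value - 1) clears it.
  return value != 0 && (value & (value - 1)) == 0;
}

constexpr bool is_supported_alignment(std::size_t alignment) noexcept
{
  return is_pow2(alignment);  // any power of 2 is a valid alignment
}

constexpr std::size_t align_up(std::size_t value, std::size_t alignment) noexcept
{
  // Round up to the next multiple of alignment (alignment must be a power of 2).
  return (value + (alignment - 1)) & ~(alignment - 1);
}

constexpr std::size_t align_down(std::size_t value, std::size_t alignment) noexcept
{
  // Round down to the previous multiple of alignment.
  return value & ~(alignment - 1);
}

constexpr bool is_aligned(std::size_t value, std::size_t alignment) noexcept
{
  return value == align_down(value, alignment);
}

// Spot checks: 100 rounds up to the next 256-byte boundary, 300 rounds down.
static_assert(align_up(100, 256) == 256);
static_assert(align_down(300, 256) == 256);
static_assert(is_aligned(512, 256) && !is_aligned(513, 256));
```

Because the removed wrappers only delegated, behavior is unchanged for code that switches from `rmm::detail::align_up` and friends to the `rmm::` equivalents.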
170 changes: 0 additions & 170 deletions include/rmm/mr/device/pool_memory_resource.hpp
@@ -111,147 +111,6 @@ class pool_memory_resource final
friend class detail::stream_ordered_memory_resource<pool_memory_resource<Upstream>,
detail::coalescing_free_list>;

/**
* @brief Construct a `pool_memory_resource` and allocate the initial device memory
* pool using `upstream_mr`.
*
* @deprecated Use the constructor that takes an explicit initial pool size instead.
*
* @throws rmm::logic_error if `upstream_mr == nullptr`
* @throws rmm::logic_error if `initial_pool_size` is neither the default nor aligned to a
* multiple of pool_memory_resource::allocation_alignment bytes.
* @throws rmm::logic_error if `maximum_pool_size` is neither the default nor aligned to a
* multiple of pool_memory_resource::allocation_alignment bytes.
*
* @param upstream_mr The memory_resource from which to allocate blocks for the pool.
* @param initial_pool_size Minimum size, in bytes, of the initial pool. Defaults to zero.
* @param maximum_pool_size Maximum size, in bytes, that the pool can grow to. Defaults to all
* of the available memory from the upstream resource.
*/
template <class Optional,
cuda::std::enable_if_t<cuda::std::is_same_v<cuda::std::remove_cvref_t<Optional>,
thrust::optional<std::size_t>>,
int> = 0>
[[deprecated(
"Must specify initial_pool_size and use std::optional instead of thrust::optional.")]] //
explicit pool_memory_resource(Upstream* upstream_mr,
Optional initial_pool_size,
Optional maximum_pool_size = thrust::nullopt)
: pool_memory_resource(
upstream_mr, initial_pool_size.value_or(0), maximum_pool_size.value_or(std::nullopt))
{
}

/**
* @brief Construct a `pool_memory_resource` and allocate the initial device memory
* pool using `upstream_mr`.
*
* @deprecated Use the constructor that takes an explicit initial pool size instead.
*
* @throws rmm::logic_error if `upstream_mr == nullptr`
* @throws rmm::logic_error if `initial_pool_size` is neither the default nor aligned to a
* multiple of pool_memory_resource::allocation_alignment bytes.
* @throws rmm::logic_error if `maximum_pool_size` is neither the default nor aligned to a
* multiple of pool_memory_resource::allocation_alignment bytes.
*
* @param upstream_mr The memory_resource from which to allocate blocks for the pool.
* @param initial_pool_size Minimum size, in bytes, of the initial pool. Defaults to zero.
* @param maximum_pool_size Maximum size, in bytes, that the pool can grow to. Defaults to all
* of the available memory from the upstream resource.
*/
[[deprecated("Must specify initial_pool_size")]] //
explicit pool_memory_resource(Upstream* upstream_mr,
std::optional<std::size_t> initial_pool_size = std::nullopt,
std::optional<std::size_t> maximum_pool_size = std::nullopt)
: pool_memory_resource(upstream_mr, initial_pool_size.value_or(0), maximum_pool_size)
{
}

/**
* @brief Construct a `pool_memory_resource` and allocate the initial device memory pool using
* `upstream_mr`.
*
* @deprecated Use the constructor that takes an explicit initial pool size instead.
*
* @throws rmm::logic_error if `upstream_mr == nullptr`
* @throws rmm::logic_error if `initial_pool_size` is neither the default nor aligned to a
* multiple of pool_memory_resource::allocation_alignment bytes.
* @throws rmm::logic_error if `maximum_pool_size` is neither the default nor aligned to a
* multiple of pool_memory_resource::allocation_alignment bytes.
*
* @param upstream_mr The memory_resource from which to allocate blocks for the pool.
* @param initial_pool_size Minimum size, in bytes, of the initial pool. Defaults to zero.
* @param maximum_pool_size Maximum size, in bytes, that the pool can grow to. Defaults to all
* of the available memory from the upstream resource.
*/
template <class Optional,
cuda::std::enable_if_t<cuda::std::is_same_v<cuda::std::remove_cvref_t<Optional>,
thrust::optional<std::size_t>>,
int> = 0>
[[deprecated(
"Must specify initial_pool_size and use std::optional instead of thrust::optional.")]] //
explicit pool_memory_resource(Upstream& upstream_mr,
Optional initial_pool_size,
Optional maximum_pool_size = thrust::nullopt)
: pool_memory_resource(
upstream_mr, initial_pool_size.value_or(0), maximum_pool_size.value_or(std::nullopt))
{
}

/**
* @brief Construct a `pool_memory_resource` and allocate the initial device memory pool using
* `upstream_mr`.
*
* @deprecated Use the constructor that takes an explicit initial pool size instead.
*
* @throws rmm::logic_error if `upstream_mr == nullptr`
* @throws rmm::logic_error if `initial_pool_size` is neither the default nor aligned to a
* multiple of pool_memory_resource::allocation_alignment bytes.
* @throws rmm::logic_error if `maximum_pool_size` is neither the default nor aligned to a
* multiple of pool_memory_resource::allocation_alignment bytes.
*
* @param upstream_mr The memory_resource from which to allocate blocks for the pool.
* @param initial_pool_size Minimum size, in bytes, of the initial pool. Defaults to zero.
* @param maximum_pool_size Maximum size, in bytes, that the pool can grow to. Defaults to all
* of the available memory from the upstream resource.
*/
template <typename Upstream2 = Upstream,
cuda::std::enable_if_t<cuda::mr::async_resource<Upstream2>, int> = 0>
[[deprecated("Must specify initial_pool_size")]] //
explicit pool_memory_resource(Upstream2& upstream_mr,
std::optional<std::size_t> initial_pool_size = std::nullopt,
std::optional<std::size_t> maximum_pool_size = std::nullopt)
: pool_memory_resource(upstream_mr, initial_pool_size.value_or(0), maximum_pool_size)
{
}

/**
* @brief Construct a `pool_memory_resource` and allocate the initial device memory pool using
* `upstream_mr`.
*
* @throws rmm::logic_error if `upstream_mr == nullptr`
* @throws rmm::logic_error if `initial_pool_size` is not aligned to a multiple of
* pool_memory_resource::allocation_alignment bytes.
* @throws rmm::logic_error if `maximum_pool_size` is neither the default nor aligned to a
* multiple of pool_memory_resource::allocation_alignment bytes.
*
* @param upstream_mr The memory_resource from which to allocate blocks for the pool.
* @param initial_pool_size Minimum size, in bytes, of the initial pool.
* @param maximum_pool_size Maximum size, in bytes, that the pool can grow to. Defaults to all
* of the available memory from the upstream resource.
*/
template <class Optional,
cuda::std::enable_if_t<cuda::std::is_same_v<cuda::std::remove_cvref_t<Optional>,
thrust::optional<std::size_t>>,
int> = 0>
[[deprecated("Use std::optional instead of thrust::optional.")]] //
explicit pool_memory_resource(Upstream* upstream_mr,
std::size_t initial_pool_size,
Optional maximum_pool_size)
: pool_memory_resource(upstream_mr, initial_pool_size, maximum_pool_size.value_or(std::nullopt))
{
}

/**
* @brief Construct a `pool_memory_resource` and allocate the initial device memory pool using
* `upstream_mr`.
@@ -283,35 +142,6 @@ class pool_memory_resource final
initialize_pool(initial_pool_size, maximum_pool_size);
}

/**
* @brief Construct a `pool_memory_resource` and allocate the initial device memory pool using
* `upstream_mr`.
*
* @throws rmm::logic_error if `upstream_mr == nullptr`
* @throws rmm::logic_error if `initial_pool_size` is not aligned to a multiple of
* pool_memory_resource::allocation_alignment bytes.
* @throws rmm::logic_error if `maximum_pool_size` is neither the default nor aligned to a
* multiple of pool_memory_resource::allocation_alignment bytes.
*
* @param upstream_mr The memory_resource from which to allocate blocks for the pool.
* @param initial_pool_size Minimum size, in bytes, of the initial pool.
* @param maximum_pool_size Maximum size, in bytes, that the pool can grow to. Defaults to all
* of the available memory from the upstream resource.
*/
template <class Optional,
cuda::std::enable_if_t<cuda::std::is_same_v<cuda::std::remove_cvref_t<Optional>,
thrust::optional<std::size_t>>,
int> = 0>
[[deprecated("Use std::optional instead of thrust::optional.")]] //
explicit pool_memory_resource(Upstream& upstream_mr,
std::size_t initial_pool_size,
Optional maximum_pool_size)
: pool_memory_resource(cuda::std::addressof(upstream_mr),
initial_pool_size,
maximum_pool_size.value_or(std::nullopt))
{
}

/**
* @brief Construct a `pool_memory_resource` and allocate the initial device memory pool using
* `upstream_mr`.
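All of the removed constructors follow one delegation pattern: accept an optional pool size (a `thrust::optional` or a defaulted `std::optional`), convert it with `value_or(0)`, and forward to the surviving constructor that takes an explicit `initial_pool_size`. A minimal sketch of that pattern, using a hypothetical `pool` class that records its arguments rather than RMM's actual resource:

```cpp
#include <cstddef>
#include <optional>

// Hypothetical stand-in for pool_memory_resource; it stores its arguments
// instead of allocating, to isolate the constructor-delegation pattern.
class pool
{
 public:
  // The surviving form: the initial pool size is explicit and required.
  pool(std::size_t initial_pool_size, std::optional<std::size_t> maximum_pool_size)
    : initial_{initial_pool_size}, maximum_{maximum_pool_size}
  {
  }

  // Shape of a removed overload: the optional initial size defaults to
  // zero via value_or() and delegates to the explicit constructor.
  [[deprecated("Must specify initial_pool_size")]] explicit pool(
    std::optional<std::size_t> initial_pool_size = std::nullopt,
    std::optional<std::size_t> maximum_pool_size = std::nullopt)
    : pool(initial_pool_size.value_or(0), maximum_pool_size)
  {
  }

  std::size_t initial() const noexcept { return initial_; }
  std::optional<std::size_t> maximum() const noexcept { return maximum_; }

 private:
  std::size_t initial_;
  std::optional<std::size_t> maximum_;
};
```

With the deprecated overloads now deleted, callers must pass an explicit initial size; `maximum_pool_size` remains a `std::optional` whose empty state means "grow to all available upstream memory", and the `thrust::optional` overloads are gone entirely.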
55 changes: 0 additions & 55 deletions include/rmm/thrust_rmm_allocator.h

This file was deleted.

12 changes: 12 additions & 0 deletions python/rmm/docs/conf.py
@@ -89,6 +89,18 @@
# This pattern also affects html_static_path and html_extra_path
exclude_patterns = []

# List of warnings to suppress
suppress_warnings = []

# if the file deprecated.xml does not exist in the doxygen xml output,
# breathe will fail to build the docs, so we conditionally add
# "deprecated.rst" to the exclude_patterns list
if not os.path.exists(
os.path.join(breathe_projects["librmm"], "deprecated.xml")
):
exclude_patterns.append("librmm_docs/deprecated.rst")
suppress_warnings.append("toc.excluded")
Contributor commented:

What is this for? I'd guess when you exclude things it generates an extra file and then it warns about that file not being included anywhere, or something like that?

@harrism (Member, Author) replied on Apr 24, 2024:

Because deprecated is listed in the toc in the main RST file. This lets us leave it there even though the deprecated.rst file is excluded. Without this suppression you get a warning that an excluded file is in the toc. I didn't have a better way to conditionally remove it from the toc.

Contributor replied:

Ah, OK, I see. Yeah, that seems fine then.

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
