
Update Numba docs links. #12107

Merged
merged 1 commit on Nov 9, 2022
2 changes: 1 addition & 1 deletion docs/cudf/source/user_guide/cupy-interop.ipynb
@@ -42,7 +42,7 @@
"\n",
"2. We can also use `DataFrame.values`.\n",
"\n",
"3. We can also convert via the [CUDA array interface](https://numba.pydata.org/numba-doc/dev/cuda/cuda_array_interface.html) by using cuDF's `to_cupy` functionality."
"3. We can also convert via the [CUDA array interface](https://numba.readthedocs.io/en/stable/cuda/cuda_array_interface.html) by using cuDF's `to_cupy` functionality."
]
},
{
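For readers unfamiliar with the protocol behind the links this PR updates: `__cuda_array_interface__` follows the same scheme as NumPy's CPU-side `__array_interface__`, which can be inspected without a GPU. A minimal CPU sketch (the array `arr` is a made-up example, not from the notebook):

```python
import numpy as np

# __cuda_array_interface__ mirrors NumPy's __array_interface__: a dict
# describing the underlying buffer (data pointer, shape, dtype string),
# which is what lets cuDF, CuPy, and Numba share memory without copying.
arr = np.arange(4, dtype=np.float64)
iface = arr.__array_interface__

print(iface["shape"])    # (4,)
print(iface["typestr"])  # e.g. '<f8' -> little-endian float64
```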
8 changes: 4 additions & 4 deletions docs/cudf/source/user_guide/guide-to-udfs.ipynb
@@ -446,8 +446,8 @@
"id": "00914f2a",
"metadata": {},
"source": [
"In addition to the Series.apply() method for performing custom operations, you can also pass Series objects directly into [CUDA kernels written with Numba](https://numba.pydata.org/numba-doc/latest/cuda/kernels.html).\n",
"Note that this section requires basic CUDA knowledge. Refer to [numba's CUDA documentation](https://numba.pydata.org/numba-doc/latest/cuda/index.html) for details.\n",
"In addition to the Series.apply() method for performing custom operations, you can also pass Series objects directly into [CUDA kernels written with Numba](https://numba.readthedocs.io/en/stable/cuda/kernels.html).\n",
"Note that this section requires basic CUDA knowledge. Refer to [numba's CUDA documentation](https://numba.readthedocs.io/en/stable/cuda/index.html) for details.\n",
"\n",
"The easiest way to write a Numba kernel is to use `cuda.grid(1)` to manage thread indices, and then leverage Numba's `forall` method to configure the kernel for us. Below, define a basic multiplication kernel as an example and use `@cuda.jit` to compile it."
]
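The multiplication kernel described in this cell can be sketched on the CPU without a GPU. This is an illustrative analogue of the per-thread logic, not the notebook's actual `@cuda.jit` kernel; the names `multiply_kernel_cpu` and `multiplier` are hypothetical:

```python
import numpy as np

# CPU analogue of the Numba CUDA kernel: on the GPU, each index i below
# would be one thread (i = cuda.grid(1)); here we loop over them serially.
def multiply_kernel_cpu(in_arr, out_arr, multiplier):
    for i in range(len(out_arr)):
        if i < in_arr.size:  # the bounds check the notebook recommends
            out_arr[i] = in_arr[i] * multiplier

x = np.arange(4, dtype=np.float64)
out = np.zeros_like(x)  # pre-allocated output array
multiply_kernel_cpu(x, out, 3.0)
print(out)  # [0. 3. 6. 9.]
```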
@@ -485,7 +485,7 @@
"source": [
"This kernel will take an input array, multiply it by a configurable value (supplied at runtime), and store the result in an output array. Notice that we wrapped our logic in an `if` statement. Because we can launch more threads than the size of our array, we need to make sure that we don't use threads with an index that would be out of bounds. Leaving this out can result in undefined behavior.\n",
"\n",
"To execute our kernel, must pre-allocate an output array and leverage the `forall` method mentioned above. First, we create a Series of all `0.0` in our DataFrame, since we want `float64` output. Next, we run the kernel with `forall`. `forall` requires us to specify our desired number of tasks, so we'll supply in the length of our Series (which we store in `size`). The [__cuda_array_interface__](https://numba.pydata.org/numba-doc/dev/cuda/cuda_array_interface.html) is what allows us to directly call our Numba kernel on our Series."
"To execute our kernel, must pre-allocate an output array and leverage the `forall` method mentioned above. First, we create a Series of all `0.0` in our DataFrame, since we want `float64` output. Next, we run the kernel with `forall`. `forall` requires us to specify our desired number of tasks, so we'll supply in the length of our Series (which we store in `size`). The [__cuda_array_interface__](https://numba.readthedocs.io/en/stable/cuda/cuda_array_interface.html) is what allows us to directly call our Numba kernel on our Series."
]
},
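The `forall` launch pattern described above can be sketched without a GPU. This CPU stand-in only illustrates the call shape — the real call would be `kernel.forall(size)(in_arr, out_arr, multiplier)` on device arrays, with the iterations running in parallel:

```python
import numpy as np

# CPU sketch of Numba's kernel.forall(ntasks): call the per-task function
# once per index. On the GPU, Numba picks a launch configuration so that
# at least ntasks threads run these "iterations" concurrently.
def forall(kernel, ntasks):
    def launcher(*args):
        for i in range(ntasks):
            kernel(i, *args)
    return launcher

def multiply(i, in_arr, out_arr, multiplier):
    if i < out_arr.size:  # guard against excess threads
        out_arr[i] = in_arr[i] * multiplier

size = 4
in_arr = np.arange(size, dtype=np.float64)
out_arr = np.zeros(size, dtype=np.float64)  # pre-allocated float64 output
forall(multiply, size)(in_arr, out_arr, 2.0)
print(out_arr)  # [0. 2. 4. 6.]
```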
{
@@ -2624,7 +2624,7 @@
"- Generalized NA UDFs\n",
"\n",
"\n",
"For more information please see the [cuDF](https://docs.rapids.ai/api/cudf/nightly/), [Numba.cuda](https://numba.pydata.org/numba-doc/dev/cuda/index.html), and [CuPy](https://docs-cupy.chainer.org/en/stable/) documentation."
"For more information please see the [cuDF](https://docs.rapids.ai/api/cudf/nightly/), [Numba.cuda](https://numba.readthedocs.io/en/stable/cuda/index.html), and [CuPy](https://docs-cupy.chainer.org/en/stable/) documentation."
]
}
],
4 changes: 2 additions & 2 deletions python/cudf/cudf/core/dataframe.py
@@ -3942,7 +3942,7 @@ def apply(
``apply`` relies on Numba to JIT compile ``func``.
Thus the allowed operations within ``func`` are limited to `those
supported by the CUDA Python Numba target
- <https://numba.pydata.org/numba-doc/latest/cuda/cudapysupported.html>`__.
+ <https://numba.readthedocs.io/en/stable/cuda/cudapysupported.html>`__.
For more information, see the `cuDF guide to user defined functions
<https://docs.rapids.ai/api/cudf/stable/user_guide/guide-to-udfs.html>`__.
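The row-wise semantics that `DataFrame.apply` JIT-compiles with Numba can be sketched in pure Python. This is only an illustration of the call pattern (the toy data is made up; cuDF itself requires a GPU):

```python
# Pure-Python sketch of row-wise apply: the UDF receives one row at a
# time and returns a scalar. In cuDF, only operations supported by the
# CUDA Python Numba target are allowed inside the UDF; plain arithmetic
# like this is supported.
rows = [{"a": 1, "b": 10}, {"a": 2, "b": 20}, {"a": 3, "b": 30}]

def func(row):
    return row["a"] + row["b"]

result = [func(r) for r in rows]
print(result)  # [11, 22, 33]
```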

@@ -4297,7 +4297,7 @@ def apply_chunks(
respectively (See `numba CUDA kernel documentation`_).

.. _numba CUDA kernel documentation:\
-    http://numba.pydata.org/numba-doc/latest/cuda/kernels.html
+    https://numba.readthedocs.io/en/stable/cuda/kernels.html

In the example below, the *kernel* is invoked concurrently on each
specified chunk. The *kernel* computes the corresponding output
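The chunked invocation that `apply_chunks` performs can be sketched on the CPU. This is an illustrative analogue, not cuDF's implementation (names `apply_chunks_cpu` and the chunk size of 2 are arbitrary choices for the example); on the GPU, each chunk would typically map to one thread block:

```python
import numpy as np

# CPU sketch of apply_chunks: split the data into fixed-size chunks and
# invoke the kernel once per chunk, writing into the matching output slice.
def apply_chunks_cpu(arr, chunk_size, kernel):
    out = np.empty_like(arr)
    for start in range(0, arr.size, chunk_size):
        stop = min(start + chunk_size, arr.size)
        kernel(arr[start:stop], out[start:stop])
    return out

def kernel(chunk_in, chunk_out):
    chunk_out[:] = chunk_in * 2.0  # per-chunk computation

res = apply_chunks_cpu(np.arange(5, dtype=np.float64), 2, kernel)
print(res)  # [0. 2. 4. 6. 8.]
```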