
Commit

Try using pycon for Python console highlighting.
bdice authored Aug 23, 2022
1 parent bf45e0f commit 1f70d63
Showing 1 changed file with 9 additions and 9 deletions.
18 changes: 9 additions & 9 deletions python/docs/basics.md
@@ -33,15 +33,15 @@ A DeviceBuffer represents an **untyped, uninitialized device memory
allocation**. DeviceBuffers can be created by providing the
size of the allocation in bytes:

-```python
+```pycon
>>> import rmm
>>> buf = rmm.DeviceBuffer(size=100)
```
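
As a small illustrative check (not part of this commit): the buffer holds exactly `size` bytes, and because the allocation is uninitialized, only its length is meaningful:

```pycon
>>> import rmm
>>> buf = rmm.DeviceBuffer(size=100)
>>> len(buf.tobytes())  # copies the raw, uninitialized bytes back to the host
100
```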

The size of the allocation and the memory address associated with it
can be accessed via the `.size` and `.ptr` attributes respectively:

-```python
+```pycon
>>> buf.size
100
>>> buf.ptr
@@ -50,7 +50,7 @@ can be accessed via the `.size` and `.ptr` attributes respectively:

DeviceBuffers can also be created by copying data from host memory:

-```python
+```pycon
>>> import rmm
>>> import numpy as np
>>> a = np.array([1, 2, 3], dtype='float64')
@@ -62,7 +62,7 @@ DeviceBuffers can also be created by copying data from host memory:
Conversely, the data underlying a DeviceBuffer can be copied to the
host:

-```python
+```pycon
>>> np.frombuffer(buf.tobytes())
array([1., 2., 3.])
```
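
The host-to-device step is cut off by the hunk above; the following is a minimal round-trip sketch (an illustration, not part of this commit), assuming `rmm.DeviceBuffer.to_device` accepts a bytes-like host object:

```pycon
>>> import rmm
>>> import numpy as np
>>> a = np.array([1, 2, 3], dtype='float64')
>>> buf = rmm.DeviceBuffer.to_device(a.tobytes())  # copy host bytes to the device
>>> np.frombuffer(buf.tobytes(), dtype='float64')  # copy back and reinterpret
array([1., 2., 3.])
```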
@@ -83,7 +83,7 @@ used to set a different MemoryResource for the current CUDA device. For
example, enabling the `ManagedMemoryResource` tells RMM to use
`cudaMallocManaged` instead of `cudaMalloc` for allocating memory:

-```python
+```pycon
>>> import rmm
>>> rmm.mr.set_current_device_resource(rmm.mr.ManagedMemoryResource())
```
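
As a quick sanity check (an editorial addition, not part of this diff), the resource that was just set can be read back, assuming `rmm.mr.get_current_device_resource` is available:

```pycon
>>> import rmm
>>> rmm.mr.set_current_device_resource(rmm.mr.ManagedMemoryResource())
>>> isinstance(rmm.mr.get_current_device_resource(), rmm.mr.ManagedMemoryResource)
True
```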
@@ -100,7 +100,7 @@ below shows how to construct a PoolMemoryResource with an initial size
of 1 GiB and a maximum size of 4 GiB. The pool uses
`CudaMemoryResource` as its underlying ("upstream") memory resource:

-```python
+```pycon
>>> import rmm
>>> pool = rmm.mr.PoolMemoryResource(
... rmm.mr.CudaMemoryResource(),
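...     # -- illustrative continuation, not part of this commit; the remaining
...     # arguments are cut off by the diff view. Assuming PoolMemoryResource
...     # accepts initial_pool_size and maximum_pool_size, the 1 GiB / 4 GiB
...     # sizes described above would be passed as:
...     initial_pool_size=2**30,
...     maximum_pool_size=2**32,
... )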
@@ -112,7 +112,7 @@ of 1 GiB and a maximum size of 4 GiB. The pool uses

Similarly, to use a pool of managed memory:

-```python
+```pycon
>>> import rmm
>>> pool = rmm.mr.PoolMemoryResource(
... rmm.mr.ManagedMemoryResource(),
@@ -137,7 +137,7 @@ You can configure [CuPy](https://cupy.dev/) to use RMM for memory
allocations by setting the CuPy CUDA allocator to
`rmm_cupy_allocator`:

-```python
+```pycon
>>> import rmm
>>> import cupy
>>> cupy.cuda.set_allocator(rmm.rmm_cupy_allocator)
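>>> # -- illustrative addition, not part of this commit --
>>> # CuPy allocations made after this point are served by RMM's
>>> # current device resource:
>>> a = cupy.zeros(1024)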
@@ -158,7 +158,7 @@ This can be done in two ways:

2. Using the `set_memory_manager()` function provided by Numba:

-```python
+```pycon
>>> from numba import cuda
>>> import rmm
>>> cuda.set_memory_manager(rmm.RMMNumbaManager)
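>>> # -- illustrative addition, not part of this commit --
>>> # device arrays created through Numba now draw their memory from RMM:
>>> import numpy as np
>>> d_ary = cuda.to_device(np.zeros(100))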
