Add nx-cugraph Docs Pages (#4669)
Closes rapidsai/graph_dl#606 and [another
issue]

## Proposed Changes
In preparation for GA release, this PR adds a landing page for
`nx-cugraph` in the cugraph API documentation site. The new pages can be
viewed by clicking `nx-cugraph` in the navigation bar at the top of the
page.

### New pages
nx-cugraph
 ├─ How it works
 ├─ Supported Algorithms
 ├─ Getting Started
 ├─ Benchmarks
 └─ FAQ

## Notes for Reviewers

- In order to build and test these docs, I modified the `build.sh` file
to use `sphinx-autobuild`.

```bash
cd ${REPODIR}/docs/cugraph-docs
#make html
sphinx-autobuild source build/html
fi
```

- For now, I believe the best way to view these changes is to clone the
PR branch, then run `build.sh` to host the webserver locally.

---------

Co-authored-by: Don Acosta <[email protected]>
Co-authored-by: rlratzel <[email protected]>
Co-authored-by: Erik Welch <[email protected]>
4 people authored Oct 3, 2024
1 parent 9b107b9 commit a936327
Showing 11 changed files with 599 additions and 60 deletions.
Binary file added docs/cugraph/source/_static/bc_benchmark.png
Binary file added docs/cugraph/source/_static/colab.png
28 changes: 28 additions & 0 deletions docs/cugraph/source/nx_cugraph/benchmarks.md
@@ -0,0 +1,28 @@
# Benchmarks

## NetworkX vs. nx-cugraph
We ran several commonly used graph algorithms on both `networkx` and `nx-cugraph`. Here are the results:


<figure>

![bench-image](../_static/bc_benchmark.png)

<figcaption style="text-align: center;">Results from running this <a
href="https://github.com/rapidsai/cugraph/blob/HEAD/benchmarks/nx-cugraph/pytest-based/bench_algos.py">benchmark</a></figcaption>
</figure>

## Reproducing Benchmarks

Below are the steps to reproduce the results on your workstation. They are also documented in this [README](https://github.com/rapidsai/cugraph/blob/HEAD/benchmarks/nx-cugraph/pytest-based).

1. Clone the latest <https://github.com/rapidsai/cugraph>

2. Follow the instructions to build an environment

3. Activate the environment

4. Install the latest `nx-cugraph` by following the [guide](installation.md)

5. Follow the instructions written in the README here: `cugraph/benchmarks/nx-cugraph/pytest-based/`
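Condensed as a shell session, the steps above look roughly like this. This is a sketch, not a verified transcript: the environment-creation commands (steps 2–3) vary by setup and live in the repository's build instructions, and `nx-cugraph-cu12` assumes a CUDA 12 system.

```shell
# 1. Clone the repository
git clone https://github.com/rapidsai/cugraph.git
cd cugraph

# 2-3. Build and activate an environment (see the repo's build
#      instructions; the exact commands vary by setup)

# 4. Install the latest nx-cugraph (CUDA 12 wheel shown; see the
#    installation guide for conda and CUDA 11 variants)
pip install nx-cugraph-cu12 --extra-index-url https://pypi.nvidia.com

# 5. Run the benchmarks as described in the pytest-based README
cd benchmarks/nx-cugraph/pytest-based
```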
5 changes: 5 additions & 0 deletions docs/cugraph/source/nx_cugraph/faqs.md
@@ -0,0 +1,5 @@
# FAQ

> **1. Is `nx-cugraph` able to run across multiple GPUs?**

`nx-cugraph` currently does not support multi-GPU execution. Multi-GPU support may be added in a future release, but consider [cuGraph](https://docs.rapids.ai/api/cugraph/stable) for multi-GPU accelerated graph analytics in Python today.
114 changes: 114 additions & 0 deletions docs/cugraph/source/nx_cugraph/how-it-works.md
@@ -0,0 +1,114 @@
# How it Works

NetworkX has the ability to **dispatch function calls to separately-installed third-party backends**.

NetworkX backends let users experience improved performance and/or additional functionality without changing their NetworkX Python code. Examples include backends that provide algorithm acceleration using GPUs, parallel processing, graph database integration, and more.

While NetworkX is a pure-Python implementation with minimal to no dependencies, backends may be written in other languages and require specialized hardware and/or OS support, additional software dependencies, or even separate services. Installation instructions vary based on the backend, and additional information can be found from the individual backend project pages listed in the NetworkX Backend Gallery.


![nxcg-execution-flow](../_static/nxcg-execution-diagram.jpg)

## Enabling nx-cugraph

NetworkX will use nx-cugraph as the graph analytics backend if any of the
following are used:

### `NETWORKX_BACKEND_PRIORITY` environment variable

The `NETWORKX_BACKEND_PRIORITY` environment variable tells NetworkX to automatically dispatch to the specified backends. It can be set to a single backend name or to a comma-separated list of backends in the priority order NetworkX should try them. When a function supported by nx-cugraph is called, NetworkX redirects the call to nx-cugraph automatically, falls back to the next backend in the list if one is provided, or runs the default NetworkX implementation. See [NetworkX Backends and Configs](https://networkx.org/documentation/stable/reference/backends.html).

For example, this setting makes NetworkX use nx-cugraph for every function called by the script that nx-cugraph supports, and the default NetworkX implementation for all others.
```bash
bash> NETWORKX_BACKEND_PRIORITY=cugraph python my_networkx_script.py
```

This example will have NetworkX use nx-cugraph for functions it supports, then try `other_backend` for functions nx-cugraph does not support, and finally fall back to the default NetworkX implementation if neither backend supports them:
```bash
bash> NETWORKX_BACKEND_PRIORITY="cugraph,other_backend" python my_networkx_script.py
```

### `backend=` keyword argument

To explicitly specify a particular backend for an API, use the `backend=`
keyword argument. This argument takes precedence over the
`NETWORKX_BACKEND_PRIORITY` environment variable. This requires anyone
running code that uses the `backend=` keyword argument to have the specified
backend installed.

Example:
```python
nx.betweenness_centrality(cit_patents_graph, k=k, backend="cugraph")
```

### Type-based dispatching

NetworkX also supports automatically dispatching to backends associated with
specific graph types. Like the `backend=` keyword argument example above, this
requires the user to write code for a specific backend, and therefore requires
the backend to be installed, but has the advantage of ensuring a particular
behavior without the potential for runtime conversions.

To use type-based dispatching with nx-cugraph, the user must import the backend
directly in their code to access the utilities provided to create a Graph
instance specifically for the nx-cugraph backend.

Example:
```python
import networkx as nx
import nx_cugraph as nxcg

G = nx.Graph()
...
nxcg_G = nxcg.from_networkx(G) # conversion happens once here
nx.betweenness_centrality(nxcg_G, k=1000) # nxcg Graph type causes cugraph backend
# to be used, no conversion necessary
```

## Command Line Example

---

Create `bc_demo.ipy` and paste the code below.

```python
import pandas as pd
import networkx as nx

url = "https://data.rapids.ai/cugraph/datasets/cit-Patents.csv"
df = pd.read_csv(url, sep=" ", names=["src", "dst"], dtype="int32")
G = nx.from_pandas_edgelist(df, source="src", target="dst")

%time result = nx.betweenness_centrality(G, k=10)
```
Run the command:
```bash
user@machine:/# ipython bc_demo.ipy
```

You will observe a run time of approximately 7 minutes, more or less depending on your CPU.
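To sanity-check the same call pattern without the large download, the snippet below (an illustrative stand-in, not part of the demo) runs the identical API on a tiny built-in graph:

```python
import time
import networkx as nx

# Same call shape as bc_demo.ipy, but on the 34-node karate club graph,
# so it completes in milliseconds on any machine.
G = nx.karate_club_graph()
start = time.perf_counter()
result = nx.betweenness_centrality(G, k=10, seed=42)
print(f"scored {len(result)} nodes in {time.perf_counter() - start:.4f}s")
```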

Run the command again, this time specifying cugraph as the NetworkX backend.
```bash
user@machine:/# NETWORKX_BACKEND_PRIORITY=cugraph ipython bc_demo.ipy
```
This run will be much faster, typically around 20 seconds depending on your GPU.
There is also an option to cache the graph conversion to GPU, which can dramatically improve performance when running multiple algorithms on the same graph. Caching is enabled by default for NetworkX versions 3.4 and later; if using an older version, set `NETWORKX_CACHE_CONVERTED_GRAPHS=True`:
```bash
NETWORKX_BACKEND_PRIORITY=cugraph NETWORKX_CACHE_CONVERTED_GRAPHS=True ipython bc_demo.ipy
```
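The same two variables can also be set from Python, for example at the top of a script or notebook. This is a sketch under the assumption that NetworkX reads them at import time, so they should be set before `networkx` is imported:

```python
import os

# Equivalent to the shell prefix above; set before importing networkx
# so the dispatch machinery picks the values up.
os.environ["NETWORKX_BACKEND_PRIORITY"] = "cugraph"
os.environ["NETWORKX_CACHE_CONVERTED_GRAPHS"] = "True"
```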

When running Python interactively, the cugraph backend can be specified as an argument in the algorithm call.

For example:
```python
nx.betweenness_centrality(cit_patents_graph, k=k, backend="cugraph")
```


The latest list of algorithms supported by nx-cugraph can be found [here](https://github.com/rapidsai/cugraph/blob/HEAD/python/nx-cugraph/README.md#algorithms) or in the next section.

---
49 changes: 44 additions & 5 deletions docs/cugraph/source/nx_cugraph/index.rst
@@ -1,9 +1,48 @@
nx-cugraph
-----------

nx-cugraph is a `NetworkX backend <https://networkx.org/documentation/stable/reference/utils.html#backends>`_ that provides **GPU acceleration** to many popular NetworkX algorithms.

By simply `installing and enabling nx-cugraph <https://github.com/rapidsai/cugraph/blob/HEAD/python/nx-cugraph/README.md#install>`_, users can see significant speedup on workflows where performance is hindered by the default NetworkX implementation. With ``nx-cugraph``, users can have GPU-based, large-scale performance **without** changing their familiar and easy-to-use NetworkX code.

.. code-block:: python

    import pandas as pd
    import networkx as nx
    url = "https://data.rapids.ai/cugraph/datasets/cit-Patents.csv"
    df = pd.read_csv(url, sep=" ", names=["src", "dst"], dtype="int32")
    G = nx.from_pandas_edgelist(df, source="src", target="dst")
    %time result = nx.betweenness_centrality(G, k=10)

.. figure:: ../_static/colab.png
   :width: 200px
   :target: https://nvda.ws/4drM4re

   Try it on Google Colab!


+------------------------------------------------------------------------------------------------------------------------+
| **Zero Code Change Acceleration** |
| |
| Just ``nx.config.backend_priority=["cugraph"]`` in Jupyter, or set ``NETWORKX_BACKEND_PRIORITY=cugraph`` in the shell. |
+------------------------------------------------------------------------------------------------------------------------+
| **Run the same code on CPU or GPU** |
| |
| Nothing changes, not even your `import` statements, when going from CPU to GPU. |
+------------------------------------------------------------------------------------------------------------------------+


``nx-cugraph`` is now Generally Available (GA) as part of the ``RAPIDS`` package. See `RAPIDS
Quick Start <https://rapids.ai/#quick-start>`_ to get up-and-running with ``nx-cugraph``.

.. toctree::
   :maxdepth: 1
:caption: Contents:

nx_cugraph.md
how-it-works
supported-algorithms
installation
benchmarks
faqs
50 changes: 50 additions & 0 deletions docs/cugraph/source/nx_cugraph/installation.md
@@ -0,0 +1,50 @@
# Getting Started

This guide describes how to install ``nx-cugraph`` and use it in your workflows.


## System Requirements

`nx-cugraph` requires the following:

- **Volta architecture or later NVIDIA GPU, with [compute capability](https://developer.nvidia.com/cuda-gpus) 7.0+**
- **[CUDA](https://docs.nvidia.com/cuda/index.html) 11.2, 11.4, 11.5, 11.8, 12.0, 12.2, or 12.5**
- **Python >= 3.10**
- **[NetworkX](https://networkx.org/documentation/stable/install.html#) >= 3.0 (version 3.2 or higher recommended)**

More details about system requirements can be found in the [RAPIDS System Requirements Documentation](https://docs.rapids.ai/install#system-req).
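A minimal Python-side preflight for the last two requirements above (the GPU and CUDA checks require NVIDIA tooling such as `nvidia-smi` and are not shown here):

```python
import sys
import networkx as nx

# Checks the Python and NetworkX requirements listed above.
print("Python >= 3.10:", sys.version_info >= (3, 10))
print("NetworkX version:", nx.__version__)
```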

## Installing nx-cugraph

Read the [RAPIDS Quick Start Guide](https://docs.rapids.ai/install) to learn more about installing all RAPIDS libraries.

`nx-cugraph` can be installed using conda or pip. It is included in the RAPIDS metapackage, or can be installed separately.

### Conda
**Nightly version**
```bash
conda install -c rapidsai-nightly -c conda-forge -c nvidia nx-cugraph
```

**Stable version**
```bash
conda install -c rapidsai -c conda-forge -c nvidia nx-cugraph
```

### pip
**Nightly version**
```bash
pip install nx-cugraph-cu11 --extra-index-url https://pypi.anaconda.org/rapidsai-wheels-nightly/simple
```

**Stable version**
```bash
pip install nx-cugraph-cu11 --extra-index-url https://pypi.nvidia.com
```

<div style="border: 1px solid #ccc; background-color: #f9f9f9; padding: 10px; border-radius: 5px;">

**Note:**
- The `pip install` examples above are for CUDA 11. To install for CUDA 12, replace `-cu11` with `-cu12`.

</div>
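After installing, a quick import check (a hypothetical helper, not from the docs) confirms the package is visible to Python without touching the GPU:

```python
import importlib.util

# find_spec only inspects the import machinery, so this is safe to run
# even on machines without a GPU.
if importlib.util.find_spec("nx_cugraph") is not None:
    print("nx-cugraph is installed")
else:
    print("nx-cugraph not found -- see the install commands above")
```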
57 changes: 3 additions & 54 deletions docs/cugraph/source/nx_cugraph/nx_cugraph.md
@@ -1,18 +1,10 @@
### nx_cugraph

`nx-cugraph` is a [NetworkX backend](<https://networkx.org/documentation/stable/reference/utils.html#backends>) that accelerates many popular NetworkX functions using cuGraph and NVIDIA GPUs.
Users simply [install and enable nx-cugraph](installation.md) to experience GPU speedups.

Let's look at some examples of algorithm speedups comparing CPU-based NetworkX to dispatched versions run on GPU with nx-cugraph.

![Ancestors](../images/ancestors.png)
![BFS Tree](../images/bfs_tree.png)
![Pagerank](../images/pagerank.png)
![Single Source Shortest Path](../images/sssp.png)
![Weakly Connected Components](../images/wcc.png)
