update profiler tutorial (#15580)
* update profiler tutorial

* Update profiler.md

* Update profiler.md

* Update profiler.md

* Re-Trigger build

* Re-Trigger build

* Re-Trigger build

* Update profiler.md
Zha0q1 authored and apeforest committed Jul 26, 2019
1 parent 5e6ba7b commit c310763
Showing 1 changed file with 15 additions and 12 deletions: docs/tutorials/python/profiler.md
@@ -195,10 +195,10 @@ print(profiler.dumps())
You can also dump the information collected by the profiler into a `json` file using the `profiler.dump()` function and view it in a browser.

```python
-profiler.dump()
+profiler.dump(finished=False)
```

-`dump()` creates a `json` file which can be viewed using a trace consumer like `chrome://tracing` in the Chrome browser. Here is a snapshot that shows the output of the profiling we did above.
+`dump()` creates a `json` file which can be viewed using a trace consumer like `chrome://tracing` in the Chrome browser. Here is a snapshot that shows the output of the profiling we did above. Note that passing `finished=False` tells the profiler that this is not the final dump, so profiling can continue afterwards. If you just use `profiler.dump()` (where `finished` defaults to `True`), you will no longer be able to profile the remaining sections of your model.

![Tracing Screenshot](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_output_chrome.png)
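Assuming only the profiler calls already shown in this tutorial, a minimal sketch of an intermediate dump followed by further profiling might look like this (the NDArray operations here are just placeholders for real sections of a model):

```python
import mxnet as mx
from mxnet import profiler

profiler.set_config(profile_all=True, aggregate_stats=True, continuous_dump=True)

# First section to profile
profiler.set_state('run')
a = mx.nd.random.uniform(shape=(1000, 1000))
b = mx.nd.dot(a, a)
mx.nd.waitall()
profiler.set_state('stop')

# Intermediate dump: finished=False means this is not the final dump,
# so the profiler can still be used afterwards.
profiler.dump(finished=False)

# This second section is still profiled because the dump above was not final.
profiler.set_state('run')
c = mx.nd.dot(b, b)
mx.nd.waitall()
profiler.set_state('stop')

# Last dump of the program: the default finished=True is fine here.
profiler.dump()
```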

Expand All @@ -214,11 +214,6 @@ Should the existing NDArray operators fail to meet all your model's needs, MXNet
Let's try profiling custom operators with the following code example:

```python
-
-import mxnet as mx
-from mxnet import nd
-from mxnet import profiler
-
class MyAddOne(mx.operator.CustomOp):
    def forward(self, is_train, req, in_data, out_data, aux):
        self.assign(out_data[0], req[0], in_data[0]+1)
@@ -246,15 +241,17 @@ class CustomAddOneProp(mx.operator.CustomOpProp):

inp = mx.nd.zeros(shape=(500, 500))

-profiler.set_config(profile_all=True, continuous_dump = True)
+profiler.set_config(profile_all=True, continuous_dump=True, \
+                    aggregate_stats=True)
profiler.set_state('run')

w = nd.Custom(inp, op_type="MyAddOne")

mx.nd.waitall()

profiler.set_state('stop')
-profiler.dump()
+print(profiler.dumps())
+profiler.dump(finished=False)
```

Here, we have created a custom operator called `MyAddOne`, and within its `forward()` function, we simply add one to the input. We can visualize the dump file in `chrome://tracing/`:
Expand All @@ -267,10 +264,10 @@ Please note that: to be able to see the previously described information, you ne

```python
# Set profile_all to True
-profiler.set_config(profile_all=True, aggregate_stats=True, continuous_dump = True)
+profiler.set_config(profile_all=True, aggregate_stats=True, continuous_dump=True)
# OR, Explicitly Set profile_symbolic and profile_imperative to True
-profiler.set_config(profile_symbolic = True, profile_imperative = True, \
-                    aggregate_stats=True, continuous_dump = True)
+profiler.set_config(profile_symbolic=True, profile_imperative=True, \
+                    aggregate_stats=True, continuous_dump=True)

profiler.set_state('run')
# Use Symbolic Mode
Expand All @@ -280,9 +277,15 @@ c = b.bind(mx.cpu(), {'a': inp})
y = c.forward()
mx.nd.waitall()
profiler.set_state('stop')
print(profiler.dumps())
profiler.dump()
```

+### Some Rules to Pay Attention to
+1. Always use `profiler.dump(finished=False)` if the dump is not the last one you intend to make. Calling a plain `profiler.dump()` in the middle of your model may lead to unexpected behavior, and if you subsequently call `profiler.set_config()`, the program will error out.
+
+2. You can only dump to one file. Do not change the target file by calling `profiler.set_config(filename='new_name.json')` in the middle of your model; doing so will lead to incomplete dump outputs (see the sketch below).
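Putting these rules together, a minimal configuration sketch might look like the following (the file name `model_profile.json` is only an illustrative choice, not a name this tutorial prescribes):

```python
from mxnet import profiler

# Rule 2: pick the dump target once, before any profiling starts, and do not
# call set_config() with a different filename later in the run.
profiler.set_config(profile_all=True, aggregate_stats=True,
                    continuous_dump=True, filename='model_profile.json')

# From here on, every intermediate profiler.dump(finished=False) and the final
# profiler.dump() (Rule 1) writes into model_profile.json, using the same
# run/stop/dump pattern shown earlier in this tutorial.
```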

## Advanced: Using NVIDIA Profiling Tools

MXNet's Profiler is the recommended starting point for profiling MXNet code, but NVIDIA also provides a few tools for low-level profiling of CUDA code: [NVProf](https://devblogs.nvidia.com/cuda-pro-tip-nvprof-your-handy-universal-gpu-profiler/), [Visual Profiler](https://developer.nvidia.com/nvidia-visual-profiler) and [Nsight Compute](https://developer.nvidia.com/nsight-compute). You can use these tools to profile all kinds of executables, so they can be used for profiling Python scripts running MXNet, and you can use them in conjunction with the MXNet Profiler to see high-level information from MXNet alongside the low-level CUDA kernel information.