diff --git a/docs/tutorials/python/profiler.md b/docs/tutorials/python/profiler.md
index 5be6bf83ac5d..91a74e4f49cf 100644
--- a/docs/tutorials/python/profiler.md
+++ b/docs/tutorials/python/profiler.md
@@ -195,10 +195,10 @@ print(profiler.dumps())
 You can also dump the information collected by the profiler into a `json` file using the `profiler.dump()` function and view it in a browser.
 
 ```python
-profiler.dump()
+profiler.dump(finished=False)
 ```
 
-`dump()` creates a `json` file which can be viewed using a trace consumer like `chrome://tracing` in the Chrome browser. Here is a snapshot that shows the output of the profiling we did above.
+`dump()` creates a `json` file which can be viewed using a trace consumer like `chrome://tracing` in the Chrome browser. Here is a snapshot that shows the output of the profiling we did above. Note that passing `finished=False` tells the profiler that the dump is not final, so profiling can continue afterwards. If you call `profiler.dump()` with its default of `finished=True`, you will no longer be able to profile the remaining sections of your model.
 
 ![Tracing Screenshot](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/tutorials/python/profiler/profiler_output_chrome.png)
 
@@ -214,11 +214,6 @@ Should the existing NDArray operators fail to meet all your model's needs, MXNet
 Let's try profiling custom operators with the following code example:
 
 ```python
-
-import mxnet as mx
-from mxnet import nd
-from mxnet import profiler
-
 class MyAddOne(mx.operator.CustomOp):
     def forward(self, is_train, req, in_data, out_data, aux):
         self.assign(out_data[0], req[0], in_data[0]+1)
@@ -246,7 +241,8 @@ class CustomAddOneProp(mx.operator.CustomOpProp):
 
 inp = mx.nd.zeros(shape=(500, 500))
 
-profiler.set_config(profile_all=True, continuous_dump = True)
+profiler.set_config(profile_all=True, continuous_dump=True, \
+                    aggregate_stats=True)
 profiler.set_state('run')
 
 w = nd.Custom(inp, op_type="MyAddOne")
@@ -254,7 +250,8 @@ w = nd.Custom(inp, op_type="MyAddOne")
 mx.nd.waitall()
 
 profiler.set_state('stop')
-profiler.dump()
+print(profiler.dumps())
+profiler.dump(finished=False)
 ```
 
 Here, we have created a custom operator called `MyAddOne`, and within its `forward()` function, we simply add one to the input. We can visualize the dump file in `chrome://tracing/`:
@@ -267,10 +264,10 @@ Please note that: to be able to see the previously described information, you ne
 
 ```python
 # Set profile_all to True
-profiler.set_config(profile_all=True, aggregate_stats=True, continuous_dump = True)
+profiler.set_config(profile_all=True, aggregate_stats=True, continuous_dump=True)
 # OR, Explicitly Set profile_symbolic and profile_imperative to True
-profiler.set_config(profile_symbolic = True, profile_imperative = True, \
-                    aggregate_stats=True, continuous_dump = True)
+profiler.set_config(profile_symbolic=True, profile_imperative=True, \
+                    aggregate_stats=True, continuous_dump=True)
 profiler.set_state('run')
 
 # Use Symbolic Mode
@@ -280,9 +277,15 @@ c = b.bind(mx.cpu(), {'a': inp})
 y = c.forward()
 mx.nd.waitall()
 profiler.set_state('stop')
+print(profiler.dumps())
 profiler.dump()
 ```
 
+### Some Rules to Pay Attention To
+1. Always use `profiler.dump(finished=False)` if you intend to profile further sections afterwards. Calling `profiler.dump()` (which defaults to `finished=True`) in the middle of your run may lead to unexpected behavior, and a subsequent call to `profiler.set_config()` will error out.
+
+2. You can only dump to one file; do not change the target file by calling `profiler.set_config(filename='new_name.json')` in the middle of your run, as this will lead to incomplete dump outputs.
+
 ## Advanced: Using NVIDIA Profiling Tools
 
 MXNet's Profiler is the recommended starting point for profiling MXNet code, but NVIDIA also provides a couple of tools for low-level profiling of CUDA code: [NVProf](https://devblogs.nvidia.com/cuda-pro-tip-nvprof-your-handy-universal-gpu-profiler/), [Visual Profiler](https://developer.nvidia.com/nvidia-visual-profiler) and [Nsight Compute](https://developer.nvidia.com/nsight-compute). You can use these tools to profile all kinds of executables, so they can be used for profiling Python scripts running MXNet. And you can use these in conjunction with the MXNet Profiler to see high-level information from MXNet alongside the low-level CUDA kernel information.
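
For reference, the intermediate-dump workflow that the new `finished=False` text and the rules above describe can be summarized in one self-contained sketch. The random matrix-multiply workload and the `profile_output.json` filename are illustrative placeholders, not part of this patch:

```python
import mxnet as mx
from mxnet import nd, profiler

# Configure once, up front: continuous_dump=True keeps writing to the same
# json file across dumps, and aggregate_stats=True enables the in-memory
# summary returned by profiler.dumps().
profiler.set_config(profile_all=True, continuous_dump=True,
                    aggregate_stats=True, filename='profile_output.json')

# Profile the first section of the run.
profiler.set_state('run')
a = nd.random.uniform(shape=(1000, 1000))
b = nd.dot(a, a)
mx.nd.waitall()                # ensure all pending async work is recorded
profiler.set_state('stop')

print(profiler.dumps())        # text summary of everything profiled so far
profiler.dump(finished=False)  # intermediate dump: profiling may continue

# Profile a later section of the same run; this is only possible because
# the previous dump used finished=False.
profiler.set_state('run')
c = nd.dot(b, b)
mx.nd.waitall()
profiler.set_state('stop')

profiler.dump()                # final dump: finished defaults to True
```

Only the last `dump()` omits `finished=False`, and `set_config()` is called exactly once with a single filename, consistent with rules 1 and 2 above.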