From 04a00d8eea0f6a3be2887ed72f124bb7bda1473a Mon Sep 17 00:00:00 2001
From: Steve MacLean
Date: Mon, 14 Oct 2019 19:16:37 -0400
Subject: [PATCH] Miscellaneous

Fix URLs
Fix commands
---
 .../diagnostics/app_is_leaking_memory_eventual_crash.md | 4 ++--
 docs/core/diagnostics/app_running_slow_highcpu.md       | 2 +-
 docs/core/diagnostics/hung_app.md                       | 8 ++++----
 3 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/core/diagnostics/app_is_leaking_memory_eventual_crash.md b/docs/core/diagnostics/app_is_leaking_memory_eventual_crash.md
index ea7dde69e2a7c..0d9474ef30de5 100644
--- a/docs/core/diagnostics/app_is_leaking_memory_eventual_crash.md
+++ b/docs/core/diagnostics/app_is_leaking_memory_eventual_crash.md
@@ -62,7 +62,7 @@ The output should be similar to:
 
 Here we can see that right after startup, the managed heap memory is 4 MB.
 
-Now, let's hit the URL http://localhost:5000/api/diagscenario/memleak/200000
+Now, let's hit the URL [http://localhost:5000/api/diagscenario/memleak/200000](http://localhost:5000/api/diagscenario/memleak/200000)
 
 Rerun the dotnet-counters command. We should see an increase in memory usage as shown below:
 
@@ -89,7 +89,7 @@ When analyzing possible memory leaks, we need access to the apps memory heap. We
 Using the previous [Sample debug target](sample-debug-target.md) started above, run the following command to generate a core dump:
 
 ```dotnetcli
-sudo ./dotnet-dump collect -p 4807
+sudo dotnet-dump collect -p 4807
 ```
 
 4807 is the process ID that can be found using `dotnet-trace list-processes`. The result is a core dump located in the same folder.
diff --git a/docs/core/diagnostics/app_running_slow_highcpu.md b/docs/core/diagnostics/app_running_slow_highcpu.md
index 92b82e070c5ea..f8f3b9c841fa7 100644
--- a/docs/core/diagnostics/app_running_slow_highcpu.md
+++ b/docs/core/diagnostics/app_running_slow_highcpu.md
@@ -49,7 +49,7 @@ The output should be similar to the below:
 
 Here we can see that right after startup, the CPU isn't being consumed at all (0%).
 
-Now, let's hit the URL (http://localhost:5000/api/diagscenario/highcpu/60000)
+Now, let's hit the URL [http://localhost:5000/api/diagscenario/highcpu/60000](http://localhost:5000/api/diagscenario/highcpu/60000)
 
 Rerun the [dotnet-counters](dotnet-counters.md) command. We should see an increase in CPU usage as shown below:
 
diff --git a/docs/core/diagnostics/hung_app.md b/docs/core/diagnostics/hung_app.md
index 8e90645c25ad5..8c04914e95b1b 100644
--- a/docs/core/diagnostics/hung_app.md
+++ b/docs/core/diagnostics/hung_app.md
@@ -39,12 +39,12 @@ dotnet-trace list-processes
 
 Navigate to the following URL:
 
-http://localhost:5000/api/diagscenario/deadlock
+[http://localhost:5000/api/diagscenario/deadlock](http://localhost:5000/api/diagscenario/deadlock)
 
 Let the request run for about 10-15 seconds then create the dump:
 
 ```dotnetcli
-sudo ./dotnet-dump collect -p 4807
+sudo dotnet-dump collect -p 4807
 ```
 
 ## Analyzing the core dump
@@ -52,13 +52,13 @@ sudo ./dotnet-dump collect -p 4807
 To start our investigation, let's open the core dump using dotnet-dump analyze:
 
 ```dotnetcli
-./dotnet-dump analyze ~/.dotnet/tools/core_20190513_143916
+dotnet-dump analyze ~/.dotnet/tools/core_20190513_143916
 ```
 
 Since we're looking at a potential hang, we want an overall feel for the thread activity in the process. We can use the threads command as shown below:
 
 ```console
-threads
+> threads
 *0 0x1DBFF (121855)
  1 0x1DC01 (121857)
  2 0x1DC02 (121858)