
Setting maximum_execution_time to millisecond precision for under a second causes the program to hang forever on exit #260

Open · yxiang92128 opened this issue May 10, 2019 · 24 comments

@yxiang92128

During our negative testing on SLES12SP3, we set the maximum_execution_time for the blob_client to 1 millisecond and repeatedly ran a test program to trigger and exercise a timeout condition:
azure::storage::blob_request_options options;
azure::storage::td_retry_policy policy(std::chrono::seconds(1), 4);
options.set_retry_policy(policy);
options.set_server_timeout(std::chrono::seconds(1));
options.set_maximum_execution_time(std::chrono::milliseconds(1)); // sub-second value that triggers the hang
options.set_parallelism_factor(2);
blob_client = storage_account.create_cloud_blob_client(options);

When the main program returns from main() or calls exit(0) at the end, it is blocked during __do_global_dtors_aux by a thread stuck in pplx::details::event_impl::set(), and the main program then hangs forever.

This only occurs in 2.10.11, where millisecond precision was introduced. Our workaround is to set the maximum_execution_time to 1000 milliseconds, and the program then exits cleanly every time. Using Azure Storage SDK 5.0 with C++ REST SDK 2.9.1 does not produce this hang.

Here is the stack trace when the main program hangs:
GNU gdb (GDB; SUSE Linux Enterprise 12) 8.0
Attaching to process 13824
[New LWP 13825]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f721ea0899d in pthread_join () from /lib64/libpthread.so.0
Missing separate debuginfos, use: zypper install glibc-debuginfo-2.22-61.3.x86_64 libgcc_s1-debuginfo-6.2.1+r239768-2.4.x86_64 libglib-2_0-0-debuginfo-2.48.2-10.2.x86_64 libglibmm-2_4-1-debuginfo-2.48.1-5.5.x86_64 libgmodule-2_0-0-debuginfo-2.48.2-10.2.x86_64 libgobject-2_0-0-debuginfo-2.48.2-10.2.x86_64 liblzma5-debuginfo-5.0.5-4.852.x86_64 libopenssl1_0_0-debuginfo-1.0.2j-60.11.2.x86_64 libpcre1-debuginfo-8.39-7.1.x86_64 libsigc-2_0-0-debuginfo-2.8.0-6.2.x86_64 libstdc++6-debuginfo-6.2.1+r239768-2.4.x86_64 libuuid1-debuginfo-2.29.2-2.3.x86_64 libxml2-2-debuginfo-2.9.4-46.3.2.x86_64 libz1-debuginfo-1.2.8-11.1.x86_64
(gdb) info threads
Id Target Id Frame

Can somebody please take a look at the above and see whether a 1 millisecond timeout can cause this issue, and worse, block other normal operations?

Thanks,

Yang

@yxiang92128
Author

Additional notes:
A couple of experiments were set up afterwards in our SLES12SP3 environment; here are the findings:

  1. The hang only occurs when maximum_execution_time is set to a sub-second value (in our case 1 millisecond) when calling create_cloud_blob_client, e.g. reqOptions.set_maximum_execution_time(std::chrono::milliseconds(1)); (see the repro sketch after this list).
  2. With C++ REST SDK 2.9.1 and Azure Storage C++ SDK 5.0 it does not hang, which makes sense because the older versions had not yet introduced millisecond precision.
  3. The main thread appears blocked in pthread_join while the other thread is calling notify_all to wake other threads up, so the timing is critical; it does not happen often, but when it does, the program is stuck.
  4. Calling “_exit(0)”, which skips the global destructors, lets the program exit without hanging, which also makes sense.
  5. For now, we enforce the policy in our products that the SLA for maximum execution time is 1 second or more. But moving forward our SLA needs to improve, and that workaround might not work for us in the long run.
  6. Could this new millisecond precision introduce other issues?
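
For reference, a minimal, self-contained repro sketch (a hypothetical distillation of our test program, not the program itself; <account>, <key> and <container> are placeholders):

#include <was/storage_account.h>
#include <was/blob.h>
#include <chrono>

int main()
{
    auto account = azure::storage::cloud_storage_account::parse(
        U("DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"));

    azure::storage::blob_request_options options;
    // Finding 1: any sub-second value here triggers the hang on exit.
    options.set_maximum_execution_time(std::chrono::milliseconds(1));

    auto client = account.create_cloud_blob_client(options);
    auto container = client.get_container_reference(U("<container>"));
    try
    {
        container.exists(options, azure::storage::operation_context());
    }
    catch (const azure::storage::storage_exception&)
    {
        // A timeout is expected with a 1 ms budget.
    }

    return 0;  // Finding 3: intermittently hangs in __do_global_dtors_aux.
               // Finding 4: replacing this with _exit(0) always exits cleanly.
}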

@yxiang92128
Author

Could it be that the timer handler started and expired before going out of scope?

@katmsft
Member

katmsft commented May 14, 2019

Hi, thanks for reporting the issue and providing detailed information. We will take some time to investigate it. Please stay tuned.

@katmsft katmsft self-assigned this May 29, 2019
@katmsft
Member

katmsft commented Jul 24, 2019

With the given information I tried running this code to reproduce on SLES12SP3 but failed. Versions used: azure-storage-cpp 6.1.0 + cpprestsdk 2.10.14. The script I used:

#!/bin/bash
ITERATION=0
while true; do
    let ITERATION=ITERATION+1
    echo "--------------------------------------------------------------------"
    echo "Running application for iteration $ITERATION "
    echo "--------------------------------------------------------------------"
    ./Binaries/samplesblobs
done

Can you take a look and see if this code triggers the issue in your environment?
Also, can you provide the information below:

  1. Which version of boost are you using? (On Linux, the timeout feature requires boost::asio::basic_waitable_timer.)
  2. How frequent is this issue? I ran more than several tens of thousands of iterations and saw none.
  3. Is the main thread the only one calling azure-storage-cpp? If so, can you elaborate on the comment below? To my knowledge, azure-storage-cpp calls cpprestsdk's async method, which requires one thread, and the timer itself requires one thread; together with the main thread that makes 3 threads. What does 'other threads' stand for?

The main thread seems to be blocked at pthread_join and the other thread is calling notify_all to wake other threads up so the timing is critical here and it does not happen often but when it happens it will be stuck.

  4. As you can see from the source code, timer_handler does not hold any 'global' state, so it should not be affected by global destructors; why do you think they are related? The timer only exists for the lifetime of a sync API call, or until an async API has been waited on.

Currently we have test cases covering the millisecond-timeout scenarios. If that is not enough and you have concerns about the coverage, please feel free to raise them and we will carefully evaluate.
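
As background for question 1, here is a minimal illustration of the boost::asio::basic_waitable_timer pattern the Linux timeout feature is built on (an illustrative sketch only, not the SDK's actual timer_handler code):

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    // Sub-second deadlines need a waitable timer; this is the primitive the
    // Linux timeout path relies on.
    boost::asio::basic_waitable_timer<std::chrono::steady_clock> timer(io);
    timer.expires_from_now(std::chrono::milliseconds(1));
    timer.async_wait([](const boost::system::error_code& ec) {
        if (ec == boost::asio::error::operation_aborted)
            std::cout << "timer cancelled: the operation finished first\n";
        else
            std::cout << "timeout fired: cancel the request\n";
    });
    io.run();  // in the SDK, handlers run on cpprestsdk's shared thread pool
    return 0;
}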

@yxiang92128
Author

I am using SDK 6.0.0 and cpprestsdk 2.10.11. I will try your code and script in both my environment and the build environment you've suggested, to see if the hang reproduces.

Thanks,
Yang

@katmsft
Copy link
Member

katmsft commented Aug 16, 2019

Hi, can you take a look at PR #277? It resolves this issue at my end.

@yxiang92128
Author

yxiang92128 commented Oct 8, 2019

@katmsft @JinmingHu-MSFT
Hi guys,

I've just discovered a memory leak, detected by valgrind while running a test program that loops listing the blobs within a container. See the attached valgrind output.
Without PR #277 there is no leak detected; with the patch there are various timer_handler related leaks, and after about ten thousand iterations the leaked memory adds up to hundreds of megabytes.
Can you please take a look and let us know if there is any quick solution? Because that PR is in the latest release, I suspect it might be an indication of a much broader issue.

Thanks as always,

Yang

memleak_timer_handler_patch.txt

@yxiang92128
Author

Just ran a test that reads a single object for many iterations, and it revealed the same memory leak from the timer patch PR #277. See attached.
memleak_1000_readone_timerpatch.txt

@Jinming-Hu
Member

@yxiang92128 Thanks for your feedback, I'll take some time to look into it.

@Jinming-Hu
Member

@yxiang92128 Hi, I cannot reproduce the memory leak in either scenario you mentioned with 7.0.0. Can you share the source code of your test program?

@yxiang92128
Author

yxiang92128 commented Oct 9, 2019

@JinmingHu-MSFT
Please see the attached test program "test_azure.cpp" and compile script "compile_test_azure"
(renamed with a .txt suffix for uploading).
I ran it as follows:
valgrind --leak-check=full --track-origins=yes ./test_azure 1000 10 >& azure_memleak.txt

There are also two valgrind output files attached:
azure_memleak.txt
azure_noleak_without_setmaxtime.txt

What I found is that without the following lines in test_azure.cpp, valgrind does NOT show any leaks. Perhaps the leak occurs whenever maximum_execution_time is set to a large value?
// Yang: adding this line causes the mem leak;
// without this line there is no leak
reqOptions.set_maximum_execution_time(std::chrono::milliseconds(10000));

Moreover, we are currently still on the 6.1.0 release with the PR #277 patch. The new 7.0.0 release added a requirement for GCC 5.1+, and in our SLES12SP3 environment the newest GCC we support is 4.8.5, which will make it difficult for us to adopt your new features and fixes in the future.

Hope this helps you reproduce the leaks described above.

Thanks again for your help.

Yang

compile_test_azure.txt
test_azure.cpp.txt

@Jinming-Hu
Member

@yxiang92128 Hi, now I can reproduce the issue. It happens because the timer isn't stopped if the operation finishes before the timeout.

Since you set the maximum execution time to 10 seconds, if you let the program sleep for, say, 11 seconds before exit, the leak is gone.
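
To make the mechanism concrete, here is a standalone boost::asio sketch of the pattern at fault (illustrative only, not the SDK's timer_handler code):

#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    boost::asio::basic_waitable_timer<std::chrono::steady_clock> timer(io);

    timer.expires_from_now(std::chrono::seconds(10));  // maximum_execution_time
    timer.async_wait([](const boost::system::error_code& ec) {
        if (ec != boost::asio::error::operation_aborted)
            std::cout << "timeout fired\n";
    });

    // Simulate the storage operation finishing well before the deadline.
    io.post([&timer]() {
        std::cout << "operation finished, stopping the timer\n";
        // Without this cancel, the pending wait (and whatever its handler
        // captures) stays alive for the remaining ~10 seconds, which is the
        // window in which valgrind reports those allocations as leaked.
        timer.cancel();
    });

    io.run();
    return 0;
}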

@yxiang92128
Author

@JinmingHu-MSFT If the operation is finished, shouldn't the timer be stopped? Without PR #277 it didn't behave like this, and if I have multiple requests going on, the leaks seem to pile up. Can you see if you can find a fix for it? Thanks.

@Jinming-Hu
Member

@yxiang92128 PR #277 introduced this issue. I'm now trying to fix it.

@Jinming-Hu
Member

@yxiang92128 Your account name and key were leaked in azure_memleak.txt and azure_noleak_without_setmaxtime.txt. I deleted these two files; please revoke the key ASAP.

@yxiang92128
Author

@JinmingHu-MSFT Changed the secret key. Thanks for letting us know.

@Jinming-Hu
Member

@yxiang92128 Hi, can you give PR #297 a try, to verify that it fixes the memory leak and doesn't bring back the hang issue?

@yxiang92128
Author

@JinmingHu-MSFT
Unfortunately your patch does not work with our v6.0.0-based code; it fails with the following:
Scanning dependencies of target azurestorage
[ 1%] Building CXX object src/CMakeFiles/azurestorage.dir/timer_handler.cpp.o
/home/yang/azure-storage-cpp-6.0.0/Microsoft.WindowsAzure.Storage/src/timer_handler.cpp: In constructor ‘azure::storage::core::timer_handler::timer_handler(const pplx::cancellation_token&)’:
/home/yang/azure-storage-cpp-6.0.0/Microsoft.WindowsAzure.Storage/src/timer_handler.cpp:26:44: error: no match for ‘operator=’ (operand types are ‘pplx::cancellation_token_source’ and ‘std::shared_ptr<pplx::cancellation_token_source>’)
m_worker_cancellation_token_source = std::make_shared<pplx::cancellation_token_source>();
^
/home/yang/azure-storage-cpp-6.0.0/Microsoft.WindowsAzure.Storage/src/timer_handler.cpp:26:44: note: candidates are:
In file included from /usr/local/include/pplx/pplx.h:53:0,
from /usr/local/include/pplx/pplxtasks.h:61,
from /usr/local/include/cpprest/asyncrt_utils.h:17,
from /usr/local/include/cpprest/http_client.h:47,
from /home/yang/azure-storage-cpp-6.0.0/Microsoft.WindowsAzure.Storage/includes/stdafx.h:48,
from /home/yang/azure-storage-cpp-6.0.0/Microsoft.WindowsAzure.Storage/src/timer_handler.cpp:18:
/usr/local/include/pplx/pplxcancellation_token.h:779:32: note: pplx::cancellation_token_source& pplx::cancellation_token_source::operator=(const pplx::cancellation_token_source&)
cancellation_token_source& operator=(const cancellation_token_source& _Src)
^
/usr/local/include/pplx/pplxcancellation_token.h:779:32: note: no known conversion for argument 1 from ‘std::shared_ptr<pplx::cancellation_token_source>’ to ‘const pplx::cancellation_token_source&’
/usr/local/include/pplx/pplxcancellation_token.h:789:32: note: pplx::cancellation_token_source& pplx::cancellation_token_source::operator=(pplx::cancellation_token_source&&)
cancellation_token_source& operator=(cancellation_token_source&& _Src)
^
/usr/local/include/pplx/pplxcancellation_token.h:789:32: note: no known conversion for argument 1 from ‘std::shared_ptr<pplx::cancellation_token_source>’ to ‘pplx::cancellation_token_source&&’
src/CMakeFiles/azurestorage.dir/build.make:62: recipe for target 'src/CMakeFiles/azurestorage.dir/timer_handler.cpp.o' failed
make[2]: *** [src/CMakeFiles/azurestorage.dir/timer_handler.cpp.o] Error 1
CMakeFiles/Makefile2:85: recipe for target 'src/CMakeFiles/azurestorage.dir/all' failed
make[1]: *** [src/CMakeFiles/azurestorage.dir/all] Error 2
Makefile:138: recipe for target 'all' failed
make: *** [all] Error 2

@Jinming-Hu
Member

I just tried applying the patches to v6.0.0; there were some conflicts, and maybe you didn't handle them correctly. Give this branch a try: https://github.com/JinmingHu-MSFT/azure-storage-cpp/tree/fix260-6.0.0. It should fix the memory leak issue.

Please note that there are still some timer_handler related hang/crash issues. I'm still working on them.

@yxiang92128
Author

yxiang92128 commented Oct 17, 2019

@JinmingHu-MSFT
Tried your private branch and it shows great improvement, but there are still 32 bytes lost at the end; no big deal though. This test does 2000 list operations in succession. I think download now works without the hanging timer handler.

==32326== 1,192 (32 direct, 1,160 indirect) bytes in 1 blocks are definitely lost in loss record 278 of 287
==32326== at 0x4C29780: operator new(unsigned long) (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==32326== by 0x51D7EE7: pplx::details::_ScheduleFuncWithAutoInline(std::function<void ()> const&, pplx::details::_TaskInliningMode) (pplxtasks.h:627)
==32326== by 0x5211FA4: pplx::details::_Task_impl::_CancelAndRunContinuations(bool, bool, bool, std::shared_ptr<pplx::details::_ExceptionHolder> const&) (pplxtasks.h:2517)
==32326== by 0x51E0E7F: pplx::details::_Task_impl_base::_Cancel(bool) (pplxtasks.h:1872)
==32326== by 0x51EE8B3: pplx::task_completion_event::_CancelInternal() const (pplxtasks.h:2899)
==32326== by 0x91D52B0: azure::storage::core::timer_handler::stop_timer() (in /lib/libazurestorage.so.6)
==32326== by 0x91D531D: azure::storage::core::timer_handler::~timer_handler() (in /opt/teradata/tdobject/lib/libazurestorage.so.6)
==32326== by 0x51C762F: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() (shared_ptr_base.h:144)
==32326== by 0x928332E: azure::storage::core::storage_command_base::storage_command_base() (in /lib/libazurestorage.so.6)
==32326== by 0x938984A: std::_Sp_counted_ptr_inplace<azure::storage::core::storage_command<azure::storage::result_segment<azure::storage::list_blob_item> >, std::allocator<azure::storage::core::storage_command<azure::storage::result_segment<azure::storage::list_blob_item> > >, (__gnu_cxx::_Lock_policy)2>::_M_dispose() (in /lib/libazurestorage.so.6)
==32326== by 0x51C762F: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() (shared_ptr_base.h:144)
==32326== by 0x9386C5F: pplx::task::_ContinuationTaskHandle<void, azure::storage::result_segment<azure::storage::list_blob_item>, azure::storage::core::executor<azure::storage::result_segment<azure::storage::list_blob_item> >::execute_async(std::shared_ptr<azure::storage::core::storage_command<azure::storage::result_segment<azure::storage::list_blob_item> > >, azure::storage::request_options const&, azure::storage::operation_context)::{lambda(pplx::task)#1}, std::integral_constant<bool, true>, pplx::details::_TypeSelectorNoAsync>::_ContinuationTaskHandle() (in /lib/libazurestorage.so.6)
==32326==
==32326== LEAK SUMMARY:
==32326== definitely lost: 32 bytes in 1 blocks
==32326== indirectly lost: 1,160 bytes in 7 blocks
==32326== possibly lost: 1,352 bytes in 18 blocks

@Jinming-Hu
Member

@yxiang92128 Hi, I observed the same leak as you did. After the timer is stopped, cpprestsdk starts another task to do some post-processing; inside cpprestsdk there's a thread pool that runs these tasks, and it takes some time for the task to get scheduled and finished.

If you have your program sleep for about 20 ms before exit, the leak is gone. Since this memory leak is not severe, is recycled very soon, and doesn't pile up, we've decided not to fix it for now.
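
If you want clean valgrind runs in the meantime, here is a sketch of the workaround for your test harness (the 20 ms figure comes from the observation above, not a guaranteed bound):

#include <chrono>
#include <thread>

int main()
{
    // ... run the storage operations as in test_azure.cpp ...

    // Give cpprestsdk's thread pool a moment to schedule and finish the
    // post-processing task before global destructors run.
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
    return 0;
}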

@yxiang92128
Author

@JinmingHu-MSFT
thanks for the reply. Speaking of the thread pool, I always see over 40 threads created when a download starts. Is this how the cpprestsdk thread pool works, preallocating the service threads? If so, is there a way to tune how many threads it preallocates? In our application we are trying to reduce the thread count. See the following backtrace:
(gdb) info thread
Id Target Id Frame

  • 1 Thread 0x7ffff7fce7c0 (LWP 22831) "azuretest" 0x00007ffff1e4705d in nanosleep () from /lib64/libc.so.6
    2 Thread 0x7ffff10cc700 (LWP 22832) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    3 Thread 0x7ffff08cb700 (LWP 22833) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    4 Thread 0x7ffff00ca700 (LWP 22834) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    5 Thread 0x7fffef8c9700 (LWP 22835) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    6 Thread 0x7fffef0c8700 (LWP 22836) "azuretest" 0x00007ffff1e77083 in epoll_wait () from /lib64/libc.so.6
    7 Thread 0x7fffee8c7700 (LWP 22837) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    8 Thread 0x7fffee0c6700 (LWP 22838) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    9 Thread 0x7fffed8c5700 (LWP 22839) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    10 Thread 0x7fffed0c4700 (LWP 22840) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    11 Thread 0x7fffec8c3700 (LWP 22841) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    12 Thread 0x7fffec0c2700 (LWP 22842) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    13 Thread 0x7fffeb8c1700 (LWP 22843) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    14 Thread 0x7fffeb0c0700 (LWP 22844) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    15 Thread 0x7fffea8bf700 (LWP 22845) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    16 Thread 0x7fffea0be700 (LWP 22846) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    17 Thread 0x7fffe98bd700 (LWP 22847) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    18 Thread 0x7fffe90bc700 (LWP 22848) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    19 Thread 0x7fffe88bb700 (LWP 22849) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    20 Thread 0x7fffe80ba700 (LWP 22850) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    21 Thread 0x7fffe78b9700 (LWP 22851) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    22 Thread 0x7fffe70b8700 (LWP 22852) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    23 Thread 0x7fffe68b7700 (LWP 22853) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    24 Thread 0x7fffe60b6700 (LWP 22854) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    25 Thread 0x7fffe58b5700 (LWP 22855) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    26 Thread 0x7fffe50b4700 (LWP 22856) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    27 Thread 0x7fffe48b3700 (LWP 22857) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    28 Thread 0x7fffe40b2700 (LWP 22858) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    29 Thread 0x7fffe38b1700 (LWP 22859) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    30 Thread 0x7fffe30b0700 (LWP 22860) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    31 Thread 0x7fffe28af700 (LWP 22861) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    32 Thread 0x7fffe20ae700 (LWP 22862) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    33 Thread 0x7fffe18ad700 (LWP 22863) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    34 Thread 0x7fffe10ac700 (LWP 22864) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    35 Thread 0x7fffe08ab700 (LWP 22865) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    36 Thread 0x7fffe00aa700 (LWP 22866) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    37 Thread 0x7fffdf8a9700 (LWP 22867) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    38 Thread 0x7fffdf0a8700 (LWP 22868) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    39 Thread 0x7fffde8a7700 (LWP 22869) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    40 Thread 0x7fffde0a6700 (LWP 22870) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    41 Thread 0x7fffdd8a5700 (LWP 22871) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
    42 Thread 0x7fffdc867700 (LWP 22872) "azuretest" 0x00007ffff75740ca in do_futex_wait.constprop () from /lib64/libpthread.so.0
    43 Thread 0x7fffdc066700 (LWP 22874) "azuretest" 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
(gdb) info 2
Undefined info command: "2". Try "help info".
(gdb) thread 2
[Switching to thread 2 (Thread 0x7ffff10cc700 (LWP 22832))]
#0 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
(gdb) bt
#0 0x00007ffff75720bf in pthread_cond_wait@@GLIBC_2.3.2 () at /lib64/libpthread.so.0
#1 0x00007ffff6e50ead in boost::asio::detail::posix_event::wait<boost::asio::detail::scoped_lock<boost::asio::detail::posix_mutex> >(boost::asio::detail::scoped_lock<boost::asio::detail::posix_mutex>&) (this=0x652ce8, lock=...) at /usr/local/include/boost/asio/detail/posix_event.hpp:106
#2 0x00007ffff6e3aaf7 in boost::asio::detail::task_io_service::do_run_one(boost::asio::detail::scoped_lock<boost::asio::detail::posix_mutex>&, boost::asio::detail::task_io_service_thread_info&, boost::system::error_code const&) (this=0x652c90, lock=..., this_thread=..., ec=...) at /usr/local/include/boost/asio/detail/impl/task_io_service.ipp:380
#3 0x00007ffff6e3a519 in boost::asio::detail::task_io_service::run(boost::system::error_code&) (this=0x652c90, ec=...) at /usr/local/include/boost/asio/detail/impl/task_io_service.ipp:149
#4 0x00007ffff6e3ada7 in boost::asio::io_service::run() (this=0x7ffff7360808 <(anonymous namespace)::initialize_shared_threadpool(unsigned long)::uninit_threadpool+8>) at /usr/local/include/boost/asio/impl/io_service.ipp:59
#5 0x00007ffff6f27def in (anonymous namespace)::threadpool_impl::thread_start(void*) (arg=0x7ffff7360800 <(anonymous namespace)::initialize_shared_threadpool(unsigned long)::uninit_threadpool>) at /var/opt/td/home/yang/cpprestsdk-2.10.11/Release/src/pplx/threadpool.cpp:79
#6 0x00007ffff6f27d23 in (anonymous namespace)::threadpool_impl::__lambda124::operator()() const (__closure=0x652f58) at /var/opt/td/home/yang/cpprestsdk-2.10.11/Release/src/pplx/threadpool.cpp:64
#7 0x00007ffff6f28814 in boost::asio::detail::posix_thread::func<(anonymous namespace)::threadpool_impl::add_thread()::__lambda124>::run(void) (this=0x652f50) at /usr/local/include/boost/asio/detail/posix_thread.hpp:82
#8 0x00007ffff6e3fca8 in boost::asio::detail::boost_asio_detail_posix_thread_function(void*) (arg=0x652f50) at /usr/local/include/boost/asio/detail/impl/posix_thread.ipp:64
#9 0x00007ffff756d744 in start_thread () at /lib64/libpthread.so.0
#10 0x00007ffff1e76aad in clone () at /lib64/libc.so.6
(threads 3, 4, and 5 show backtraces identical to thread 2, differing only in the lambda closure addresses)
(gdb) thread 6
[Switching to thread 6 (Thread 0x7fffef0c8700 (LWP 22836))]
#0 0x00007ffff1e77083 in epoll_wait () from /lib64/libc.so.6
(gdb) bt
#0 0x00007ffff1e77083 in epoll_wait () at /lib64/libc.so.6
#1 0x00007ffff6e395a1 in boost::asio::detail::epoll_reactor::run(bool, boost::asio::detail::op_queue<boost::asio::detail::task_io_service_operation>&) (this=0x677a50, block=true, ops=...) at /usr/local/include/boost/asio/detail/impl/epoll_reactor.ipp:392
#2 0x00007ffff6e3aa32 in boost::asio::detail::task_io_service::do_run_one(boost::asio::detail::scoped_lock<boost::asio::detail::posix_mutex>&, boost::asio::detail::task_io_service_thread_info&, boost::system::error_code const&) (this=0x652c90, lock=..., this_thread=..., ec=...) at /usr/local/include/boost/asio/detail/impl/task_io_service.ipp:356
#3 0x00007ffff6e3a519 in boost::asio::detail::task_io_service::run(boost::system::error_code&) (this=0x652c90, ec=...) at /usr/local/include/boost/asio/detail/impl/task_io_service.ipp:149
#4 0x00007ffff6e3ada7 in boost::asio::io_service::run() (this=0x7ffff7360808 <(anonymous namespace)::initialize_shared_threadpool(unsigned long)::uninit_threadpool+8>) at /usr/local/include/boost/asio/impl/io_service.ipp:59
#5 0x00007ffff6f27def in (anonymous namespace)::threadpool_impl::thread_start(void*) (arg=0x7ffff7360800 <(anonymous namespace)::initialize_shared_threadpool(unsigned long)::uninit_threadpool>) at /var/opt/td/home/yang/cpprestsdk-2.10.11/Release/src/pplx/threadpool.cpp:79
#6 0x00007ffff6f27d23 in (anonymous namespace)::threadpool_impl::__lambda124::operator()() const (__closure=0x653b38) at /var/opt/td/home/yang/cpprestsdk-2.10.11/Release/src/pplx/threadpool.cpp:64
#7 0x00007ffff6f28814 in boost::asio::detail::posix_thread::func<(anonymous namespace)::threadpool_impl::add_thread()::__lambda124>::run(void) (this=0x653b30) at /usr/local/include/boost/asio/detail/posix_thread.hpp:82
#8 0x00007ffff6e3fca8 in boost::asio::detail::boost_asio_detail_posix_thread_function(void*) (arg=0x653b30) at /usr/local/include/boost/asio/detail/impl/posix_thread.ipp:64
#9 0x00007ffff756d744 in start_thread () at /lib64/libpthread.so.0
#10 0x00007ffff1e76aad in clone () at /lib64/libc.so.6
(gdb)

thanks,

Yang

@Jinming-Hu
Member

@yxiang92128

Yes, when cpprestsdk initializes, it preallocates all the threads in its thread pool.

cpprestsdk does provide an API to change the pool size, but it only takes effect before the thread pool is initialized. Unfortunately, azure-storage-cpp currently initializes the thread pool at library load time, before any user code gets a chance to change the pool size. So practically, you can't.

But I still think this is a valid feature request; why not open another issue for it, and we'll evaluate it.
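
For reference, the cpprestsdk API in question is, if I remember correctly, crossplat::threadpool::initialize_with_threads_count (available since cpprestsdk 2.10). A sketch of how it would be used in a program where nothing has touched the pool yet:

#include <pplx/threadpool.h>

int main()
{
    // Must run before anything creates the shared pool (the default size of
    // 40 threads matches the ~40 threads in the gdb listing above). With
    // azure-storage-cpp linked in, the pool is created at library load time,
    // so this call comes too late there and won't take effect.
    crossplat::threadpool::initialize_with_threads_count(4);

    // ... cpprestsdk / storage calls would go here ...
    return 0;
}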

@Jinming-Hu
Member

We're going to close this issue because of inactivity; feel free to reopen it if you have any further questions.
