forked from ClickHouse/ClickHouse
Support limits and cancel for hive/hivecluster #1
Merged
Conversation
lgbo-ustc pushed a commit that referenced this pull request on Jul 4, 2022
This use-after-free can be reproduced with distributed queries. Also note, that this is not sumMappedArray() and friends (that previously called sumMap()) but Map combinator. You will find ASan report in details. <details> READ of size 8 at 0x62d00012d218 thread T186 (QueryPipelineEx) 2022.07.03 05:09:40.000234 [ 31956 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 1.23 GiB, peak 1.23 GiB, will set to 1.25 GiB (RSS), difference: 19.51 MiB 2022.07.03 05:09:41.000137 [ 31956 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 1.25 GiB, peak 1.25 GiB, will set to 1.26 GiB (RSS), difference: 3.76 MiB #0 0x1233a0d8 in DB::AggregateFunctionSumData<>::get() const build_docker/../src/AggregateFunctions/AggregateFunctionSum.h:245:16 #1 0x1233a0d8 in DB::AggregateFunctionSum<>::insertResultInto(char*, DB::IColumn&, DB::Arena*) const build_docker/../src/AggregateFunctions/AggregateFunctionSum.h:536:70 #2 0x1470f910 in DB::AggregateFunctionMap<char8_t>::insertResultInto() const build_docker/../src/AggregateFunctions/AggregateFunctionMap.h:236:26 #3 0x147110ce in DB::IAggregateFunctionHelper<>::insertResultIntoBatch() const build_docker/../src/AggregateFunctions/IAggregateFunction.h:618:53 #4 0x2c4269d7 in void DB::Aggregator::convertToBlockImplFinal<>() const build_docker/../src/Interpreters/Aggregator.cpp:1878:49 #5 0x2c403b9f in void DB::Aggregator::convertToBlockImpl<>() const build_docker/../src/Interpreters/Aggregator.cpp:1714:13 #6 0x2be09b53 in DB::Aggregator::prepareBlockAndFillSingleLevel() const::$_2::operator()() const build_docker/../src/Interpreters/Aggregator.cpp:2144:9 #7 0x2be09b53 in DB::Block DB::Aggregator::prepareBlockAndFill<>() const build_docker/../src/Interpreters/Aggregator.cpp:2000:5 #8 0x2be09b53 in DB::Aggregator::prepareBlockAndFillSingleLevel() const build_docker/../src/Interpreters/Aggregator.cpp:2150:12 #9 0x2be37de3 in DB::Aggregator::mergeBlocks() build_docker/../src/Interpreters/Aggregator.cpp:3032:17 #10 0x308c27f8 in DB::MergingAggregatedBucketTransform::transform() build_docker/../src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:360:37 0x62d00012d218 is located 3608 bytes inside of 32768-byte region [0x62d00012c400,0x62d000134400) freed by thread T186 (QueryPipelineEx) here: #0 0xd701312 in free (/work1/azat/tmp/upstream/clickhouse-asan+0xd701312) (BuildId: b7977aef37e9f720) ... #8 0x2e3c22eb in DB::ColumnAggregateFunction::~ColumnAggregateFunction() build_docker/../src/Columns/ColumnAggregateFunction.cpp:89:1 ... 
#18 0xd9fcdd4 in std::__1::vector<DB::ColumnWithTypeAndName, std::__1::allocator<DB::ColumnWithTypeAndName> >::~vector() build_docker/../contrib/libcxx/include/vector:401:9 #19 0x2be373f4 in DB::Aggregator::mergeBlocks() build_docker/../contrib/libcxx/include/__memory/unique_ptr.h #20 0x308c27f8 in DB::MergingAggregatedBucketTransform::transform() build_docker/../src/Processors/Transforms/MergingAggregatedMemoryEfficientTransform.cpp:360:37 previously allocated by thread T186 (QueryPipelineEx) here: #0 0xd7015be in malloc (/work1/azat/tmp/upstream/clickhouse-asan+0xd7015be) (BuildId: b7977aef37e9f720) #1 0xd85190a in Allocator<false, false>::allocNoTrack(unsigned long, unsigned long) build_docker/../src/Common/Allocator.h:227:27 #2 0xd988d45 in Allocator<false, false>::alloc(unsigned long, unsigned long) build_docker/../src/Common/Allocator.h:96:16 #3 0xd988d45 in DB::Arena::MemoryChunk::MemoryChunk(unsigned long, DB::Arena::MemoryChunk*) build_docker/../src/Common/Arena.h:54:64 #4 0xd98904b in DB::Arena::addMemoryChunk(unsigned long) build_docker/../src/Common/Arena.h:122:20 #5 0xec9542c in DB::Arena::alignedAlloc(unsigned long, unsigned long) build_docker/../src/Common/Arena.h:171:13 #6 0x1470f123 in DB::AggregateFunctionMap<char8_t>::deserialize() const build_docker/../src/AggregateFunctions/AggregateFunctionMap.h:205:35 </details> P.S. Thanks to @den-crane for the reproducer. Fixes: ClickHouse#35359 (cc @den-crane @dongxiao-yang) Signed-off-by: Azat Khuzhin <[email protected]>
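The report above boils down to a lifetime problem: the aggregate state lives in an Arena owned by a ColumnAggregateFunction, and once that column is destroyed inside Aggregator::mergeBlocks(), a later insertResultInto() call reads freed Arena memory. Below is a minimal, self-contained C++ sketch of that bug class, using made-up Arena/column types rather than the real ClickHouse ones.

```cpp
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <memory>
#include <vector>

// Stand-in for the chunked allocator: memory lives only as long as the Arena.
struct Arena
{
    std::vector<std::unique_ptr<char[]>> chunks;

    char * alloc(std::size_t n)
    {
        chunks.push_back(std::make_unique<char[]>(n));
        return chunks.back().get();
    }
};

// Stand-in for ColumnAggregateFunction: owns the arena, hands out raw state pointers.
struct ColumnAggregateStates
{
    std::shared_ptr<Arena> arena;
    std::vector<char *> states;   // raw pointers into `arena`
};

int main()
{
    std::vector<char *> kept_states;

    {
        ColumnAggregateStates column;
        column.arena = std::make_shared<Arena>();
        column.states.push_back(column.arena->alloc(sizeof(std::int64_t)));

        // Bug pattern: only the raw pointers escape this scope, not the arena
        // ownership, so the memory they point at dies together with `column`.
        kept_states = column.states;
    }   // arena freed here

    // Equivalent of insertResultInto() on the freed state -- a use-after-free:
    // std::cout << *reinterpret_cast<std::int64_t *>(kept_states[0]) << '\n';

    std::cout << "keep the owning arena alive while any state pointer is used\n";
}
```

The general remedy, and what the distributed-query path needs here, is to keep the owning arena (or the column holding it) alive for as long as any raw state pointer may still be dereferenced.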
zhanglistar pushed a commit that referenced this pull request on Nov 7, 2022
…verflow UBSAN report: $ UBSAN_SYMBOLIZER_PATH=/usr/bin/llvm-symbolizer-14 UBSAN_OPTIONS=print_stacktrace=1 ./unit_tests_dbms --gtest_filter=*Gorilla* ../src/Compression/tests/gtest_compressionCodec.cpp:1216:47: runtime error: signed integer overflow: 23 * 100000000 cannot be represented in type 'int' #0 0x14f67fd1 in auto (anonymous namespace)::$_6::operator()(int) const::'lambda'(auto)::operator()<int>(auto) const build_docker/../src/Compression/tests/gtest_compressionCodec.cpp:1216:47 #1 0x14f67fd1 in (anonymous namespace)::CodecTestSequence (anonymous namespace)::generateSeq<long, (anonymous namespace)::$_6::operator()(int) const::'lambda'(auto), int, int>((anonymous namespace)::$_6::operator()(int) const::'lambda'(auto), char const*, int, int) build_docker/../src/Compression/tests/gtest_compressionCodec.cpp:394:36 #2 0x14f67fd1 in auto (anonymous namespace)::GCompatibilityTestSequence<long>() build_docker/../src/Compression/tests/gtest_compressionCodec.cpp:1224:12 #3 0x14f3c7f3 in (anonymous namespace)::gtest_GorillaCodecTestCompatibility_EvalGenerator_() build_docker/../src/Compression/tests/gtest_compressionCodec.cpp:1227:1 #4 0x14f6bdb5 in testing::internal::ParameterizedTestSuiteInfo<(anonymous namespace)::CodecTestCompatibility>::RegisterTests() build_docker/../contrib/googletest/googletest/include/gtest/internal/gtest-param-util.h:553:45 #5 0x27d87988 in testing::internal::ParameterizedTestSuiteRegistry::RegisterTests() build_docker/../contrib/googletest/googletest/include/gtest/internal/gtest-param-util.h:726:24 #6 0x27d87988 in testing::internal::UnitTestImpl::RegisterParameterizedTests() build_docker/../contrib/googletest/googletest/src/gtest.cc:2805:34 #7 0x27d87988 in testing::internal::UnitTestImpl::PostFlagParsingInit() build_docker/../contrib/googletest/googletest/src/gtest.cc:5492:5 #8 0x27d9d002 in void testing::internal::InitGoogleTestImpl<char>(int*, char**) build_docker/../contrib/googletest/googletest/src/gtest.cc:6499:22 #9 0x14fd5495 in main build_docker/../src/Coordination/tests/gtest_coordination.cpp:2189:5 #10 0x7f8c29005209 (/lib/x86_64-linux-gnu/libc.so.6+0x29209) (BuildId: 71a7c7b97bc0b3e349a3d8640252655552082bf5) #11 0x7f8c290052bb in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x292bb) (BuildId: 71a7c7b97bc0b3e349a3d8640252655552082bf5) #12 0x14ce356d in _start (/work1/azat/tmp/42190/unit_tests_dbms+0x14ce356d) (BuildId: 482550e3f8d45f06e8c7f8147f427ee798c1f645) SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior ../src/Compression/tests/gtest_compressionCodec.cpp:1216:47 in Signed-off-by: Azat Khuzhin <[email protected]>
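The UBSan finding is a plain signed overflow in the test's value generator: 23 * 100000000 is evaluated in int before being stored into a wider type. A short sketch of the problem and the conventional fix, assuming the generator simply scales an int counter (the real lambda in gtest_compressionCodec.cpp may differ):

```cpp
#include <cstdint>
#include <iostream>

int main()
{
    int i = 23;

    // Undefined behaviour: both operands are int, so the product is computed
    // in int, and 2'300'000'000 does not fit into a 32-bit int.
    // int64_t bad = i * 100000000;

    // Conventional fix: widen one operand first so the multiplication is
    // carried out in int64_t.
    int64_t good = static_cast<int64_t>(i) * 100000000;

    std::cout << good << '\n';   // prints 2300000000
}
```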
zhanglistar pushed a commit that referenced this pull request on Nov 24, 2022
…reated ASAN report: Code: 586. DB::ErrnoException: Cannot create file: /src/.clickhouse_history, errno: 2, strerror: No such file or directory. (CANNOT_CREATE_FILE) ================================================================= ==1==ERROR: AddressSanitizer: heap-use-after-free on address 0x6240000208f0 at pc 0x000030d22ade bp 0x7ffff2ff3f70 sp 0x7ffff2ff3f68 READ of size 8 at 0x6240000208f0 thread T2 #0 0x30d22add in DB::ProcessList::insert() build_docker/../src/Interpreters/ProcessList.cpp:89:36 #1 0x31411018 in DB::executeQueryImpl() build_docker/../src/Interpreters/executeQuery.cpp:516:60 #2 0x3140e1ab in DB::executeQuery() build_docker/../src/Interpreters/executeQuery.cpp:1083:30 #3 0x3364391e in DB::LocalConnection::sendQuery() build_docker/../src/Client/LocalConnection.cpp:119:21 #4 0x3367bab0 in DB::Suggest::fetch() build_docker/../src/Client/Suggest.cpp:141:16 #5 0x336820eb in void DB::Suggest::load<DB::LocalConnection>()::'lambda'()::operator()() const build_docker/../src/Client/Suggest.cpp:118:17 0x6240000208f0 is located 2032 bytes inside of 7056-byte region [0x624000020100,0x624000021c90) freed by thread T0 here: #0 0xe381ef2 in operator delete(void*, unsigned long) (/wrk/clickhouse-asan+0xe381ef2) (BuildId: 6ea6d1a5d2d5a164f60f0fd8230936305bc8d9d0) #1 0x335509fe in DB::ClientBase::~ClientBase() build_docker/../src/Client/ClientBase.cpp:293:25 #2 0x1f809bd5 in mainEntryClickHouseLocal(int, char**) build_docker/../programs/local/LocalServer.cpp:804:5 #3 0xe3856ad in main build_docker/../programs/main.cpp:482:12 #4 0x7ffff7dc0082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:308:16 previously allocated by thread T0 here: #0 0xe38128d in operator new(unsigned long) (/wrk/clickhouse-asan+0xe38128d) (BuildId: 6ea6d1a5d2d5a164f60f0fd8230936305bc8d9d0) #1 0x2f34a7f3 in std::__1::__unique_if<DB::ContextSharedPart>::__unique_single std::__1::make_unique[abi:v15003]<DB::ContextSharedPart>() build_docker/../contrib/libcxx/include/__memory/unique_ptr.h:714:28 #2 0x2f34a7f3 in DB::Context::createShared() build_docker/../src/Interpreters/Context.cpp:603:32 #3 0x1f7f901d in DB::LocalServer::processConfig() build_docker/../programs/local/LocalServer.cpp:535:22 #4 0x1f7f4d92 in DB::LocalServer::main() build_docker/../programs/local/LocalServer.cpp:419:5 #5 0x3af24ffe in Poco::Util::Application::run() build_docker/../contrib/poco/Util/src/Application.cpp:334:8 #6 0x1f809bca in mainEntryClickHouseLocal(int, char**) build_docker/../programs/local/LocalServer.cpp:803:20 #7 0xe3856ad in main build_docker/../programs/main.cpp:482:12 #8 0x7ffff7dc0082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:308:16 Thread T2 created by T0 here: #0 0xe32fedc in pthread_create (/wrk/clickhouse-asan+0xe32fedc) (BuildId: 6ea6d1a5d2d5a164f60f0fd8230936305bc8d9d0) #1 0x336806df in std::__1::__libcpp_thread_create[abi:v15003](unsigned long*, void* (*)(void*), void*) build_docker/../contrib/libcxx/include/__threading_support:376:10 #2 0x336806df in std::__1::thread::thread<void DB::Suggest::load<DB::LocalConnection>()::'lambda'(), void>() build_docker/../contrib/libcxx/include/thread:311:16 #3 0x3367ff5b in void DB::Suggest::load<DB::LocalConnection>(std::__1::shared_ptr<DB::Context const>, DB::ConnectionParameters const&, int) build_docker/../src/Client/Suggest.cpp:110:22 #4 0x3357fee9 in DB::ClientBase::runInteractive() build_docker/../src/Client/ClientBase.cpp:2066:22 #5 0x1f7f5264 in DB::LocalServer::main() build_docker/../programs/local/LocalServer.cpp 
#6 0x3af24ffe in Poco::Util::Application::run() build_docker/../contrib/poco/Util/src/Application.cpp:334:8 #7 0x1f809bca in mainEntryClickHouseLocal(int, char**) build_docker/../programs/local/LocalServer.cpp:803:20 #8 0xe3856ad in main build_docker/../programs/main.cpp:482:12 #9 0x7ffff7dc0082 in __libc_start_main /build/glibc-SzIz7B/glibc-2.31/csu/../csu/libc-start.c:308:16 Signed-off-by: Azat Khuzhin <[email protected]>
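The use-after-free here comes from thread lifetime ordering: the suggestion-loading thread spawned from ClientBase::runInteractive() is still running queries against the shared context while the main thread of clickhouse-local tears ClientBase (and with it the context and ProcessList) down. A minimal sketch of that bug class, with illustrative names only:

```cpp
#include <chrono>
#include <iostream>
#include <memory>
#include <thread>

struct Context { int value = 42; };   // stand-in for the shared query context

int main()
{
    auto context = std::make_shared<Context>();

    // Bug pattern: the background thread captures a raw pointer instead of
    // sharing ownership, and nothing joins it before the context is freed.
    std::thread loader([raw = context.get()]
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        // Dereferencing here after the main thread has run context.reset()
        // would be the heap-use-after-free from the report:
        // std::cout << raw->value << '\n';
        (void) raw;
    });

    context.reset();   // main thread tears the shared state down first

    loader.join();     // the fix: join (or capture the shared_ptr) so the
                       // context outlives every thread that uses it
}
```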
zhanglistar pushed a commit that referenced this pull request on Jan 29, 2023
…ceptors) Recently I noticed that clickhouse compiled with ASan does not work with newer glibc 2.36+, before I though that this was only about compiling with old but using new, however that was not correct, ASan simply does not work with glibc 2.36+. Here is a simple reproducer [1]: $ cat > test-asan.cpp <<EOL #include <pthread.h> int main() { // something broken in ASan in interceptor for __pthread_mutex_lock // and only since glibc 2.36, and for pthread_mutex_lock everything is OK pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; return __pthread_mutex_lock(&mutex); } EOL $ clang -g3 -o test-asan test-asan.cpp -fsanitize=address $ ./test-asan AddressSanitizer:DEADLYSIGNAL ================================================================= ==15659==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x000000000000 bp 0x7fffffffccb0 sp 0x7fffffffcb98 T0) ==15659==Hint: pc points to the zero page. ==15659==The signal is caused by a READ memory access. ==15659==Hint: address points to the zero page. #0 0x0 (<unknown module>) #1 0x7ffff7cda28f (/usr/lib/libc.so.6+0x2328f) (BuildId: 1e94beb079e278ac4f2c8bce1f53091548ea1584) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV (<unknown module>) ==15659==ABORTING [1]: https://gist.github.com/azat/af073e57a248e04488b21068643f079e I've started observing glibc code, there was some changes in glibc, that moves pthread functions out from libpthread.so.0 into libc.so.6 (somewhere between 2.31 and 2.35), but the problem pops up only with 2.36, 2.35 works fine. After this I've looked into changes between 2.35 and 2.36, and found this patch [2] - "dlsym: Make RTLD_NEXT prefer default version definition [BZ ClickHouse#14932]", that fixes this bug [3]. [2]: https://sourceware.org/git/?p=glibc.git;a=commit;h=efa7936e4c91b1c260d03614bb26858fbb8a0204 [3]: https://sourceware.org/bugzilla/show_bug.cgi?id=14932 The problem with using DL_LOOKUP_RETURN_NEWEST flag for RTLD_NEXT is that it does not resolve hidden symbols (and __pthread_mutex_lock is indeed hidden). Here is a sample that will show the difference [4]: $ cat > test-dlsym.c <<EOL #define _GNU_SOURCE #include <dlfcn.h> #include <stdio.h> int main() { void *p = dlsym(RTLD_NEXT, "__pthread_mutex_lock"); printf("__pthread_mutex_lock: %p (via RTLD_NEXT)\n", p); return 0; } EOL # glibc 2.35: __pthread_mutex_lock: 0x7ffff7e27f70 (via RTLD_NEXT) # glibc 2.36: __pthread_mutex_lock: (nil) (via RTLD_NEXT) [4]: https://gist.github.com/azat/3b5f2ae6011bef2ae86392cea7789eb7 But ThreadFuzzer uses internal symbols to wrap pthread_mutex_lock/pthread_mutex_unlock, which are intercepted by ASan and this leads to NULL dereference. The fix was obvious - just use dlsym(RTLD_NEXT), however on older glibc's this leads to endless recursion (see commits in the code). But only for jemalloc [5], and even though sanitizers does not uses jemalloc the code of ThreadFuzzer is generic and I don't want to guard it with more preprocessors macros. [5]: https://gist.github.com/azat/588d9c72c1e70fc13ebe113197883aa2 So we have to use RTLD_NEXT only for ASan. There is also one more interesting issue, if you will compile with clang that itself had been compiled with newer libc (i.e. 2.36), you will get the following error: $ podman run --privileged -v $PWD/.cmake-asan/programs:/root/bin -e PATH=/bin:/root/bin -e --rm -it ubuntu-dev-v3 clickhouse ==1==ERROR: AddressSanitizer failed to allocate 0x0 (0) bytes of SetAlternateSignalStack (error code: 22) ... ==1==End of process memory map. 
AddressSanitizer: CHECK failed: sanitizer_common.cpp:53 "((0 && "unable to mmap")) != (0)" (0x0, 0x0) (tid=1) <empty stack> The problem is that since GLIBC_2.31, `SIGSTKSZ` is a call to `getconf(_SC_MINSIGSTKSZ)`, but older glibc does not have it, so `-1` will be returned and used as `SIGSTKSZ` instead. The workaround to disable alternative stack: $ podman run --privileged -v $PWD/.cmake-asan/programs:/root/bin -e PATH=/bin:/root/bin -e ASAN_OPTIONS=use_sigaltstack=0 --rm -it ubuntu-dev-v3 clickhouse client --version ClickHouse client version 22.13.1.1. Fixes: ClickHouse#43426 Signed-off-by: Azat Khuzhin <[email protected]>
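For reference, the interception pattern the fix switches to under ASan looks roughly like the sketch below: export a pthread_mutex_lock wrapper and forward to the next definition found via dlsym(RTLD_NEXT, ...), instead of calling the hidden __pthread_mutex_lock alias. This only illustrates the mechanism; ThreadFuzzer's actual fault-injection logic and build-time guards are omitted.

```cpp
// Build as a preloadable shared object, e.g.:
//   clang++ -std=c++17 -shared -fPIC -o libwrap.so wrap.cpp -ldl
#ifndef _GNU_SOURCE
#define _GNU_SOURCE   // RTLD_NEXT is a GNU extension
#endif
#include <dlfcn.h>
#include <pthread.h>

extern "C" int pthread_mutex_lock(pthread_mutex_t * mutex)
{
    using Fn = int (*)(pthread_mutex_t *);

    // RTLD_NEXT resolves to the next definition in search order, i.e. libc's
    // implementation (or ASan's interceptor, which then forwards to libc).
    static Fn real = reinterpret_cast<Fn>(dlsym(RTLD_NEXT, "pthread_mutex_lock"));

    // ThreadFuzzer's fault injection (random sleeps/yields/migrations) would
    // run here, before delegating to the real implementation.

    return real(mutex);
}
```

As the commit message notes, dlsym() itself can allocate and re-enter the wrapper on some glibc/jemalloc combinations, which is why this RTLD_NEXT path is enabled only for sanitizer builds.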
taiyang-li pushed a commit that referenced this pull request on Feb 27, 2023
* Prefix more typedefs in DB namespace with "Gin" * Fixed tests * Update SwapHelper.h * Move some code around (no other changes) * "segment file" --> "segment metadata file" * Cosmetics * Rename MergeTreeIndexGin.h/cpp to MergeTreeIndexInverted.h/cpp * Cosmetics * Remove superfluous check (the same is checked in MergeTreeIndices.cpp) * Use GinFilters typedef where possible * Fixing build * Suffix "GinFilter" --> "Inverted" * Fix ASan builds for glibc 2.36+ (use RTLD_NEXT for ThreadFuzzer interceptors) Recently I noticed that clickhouse compiled with ASan does not work with newer glibc 2.36+, before I though that this was only about compiling with old but using new, however that was not correct, ASan simply does not work with glibc 2.36+. Here is a simple reproducer [1]: $ cat > test-asan.cpp <<EOL #include <pthread.h> int main() { // something broken in ASan in interceptor for __pthread_mutex_lock // and only since glibc 2.36, and for pthread_mutex_lock everything is OK pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; return __pthread_mutex_lock(&mutex); } EOL $ clang -g3 -o test-asan test-asan.cpp -fsanitize=address $ ./test-asan AddressSanitizer:DEADLYSIGNAL ================================================================= ==15659==ERROR: AddressSanitizer: SEGV on unknown address 0x000000000000 (pc 0x000000000000 bp 0x7fffffffccb0 sp 0x7fffffffcb98 T0) ==15659==Hint: pc points to the zero page. ==15659==The signal is caused by a READ memory access. ==15659==Hint: address points to the zero page. #0 0x0 (<unknown module>) #1 0x7ffff7cda28f (/usr/lib/libc.so.6+0x2328f) (BuildId: 1e94beb079e278ac4f2c8bce1f53091548ea1584) AddressSanitizer can not provide additional info. SUMMARY: AddressSanitizer: SEGV (<unknown module>) ==15659==ABORTING [1]: https://gist.github.com/azat/af073e57a248e04488b21068643f079e I've started observing glibc code, there was some changes in glibc, that moves pthread functions out from libpthread.so.0 into libc.so.6 (somewhere between 2.31 and 2.35), but the problem pops up only with 2.36, 2.35 works fine. After this I've looked into changes between 2.35 and 2.36, and found this patch [2] - "dlsym: Make RTLD_NEXT prefer default version definition [BZ ClickHouse#14932]", that fixes this bug [3]. [2]: https://sourceware.org/git/?p=glibc.git;a=commit;h=efa7936e4c91b1c260d03614bb26858fbb8a0204 [3]: https://sourceware.org/bugzilla/show_bug.cgi?id=14932 The problem with using DL_LOOKUP_RETURN_NEWEST flag for RTLD_NEXT is that it does not resolve hidden symbols (and __pthread_mutex_lock is indeed hidden). Here is a sample that will show the difference [4]: $ cat > test-dlsym.c <<EOL #define _GNU_SOURCE #include <dlfcn.h> #include <stdio.h> int main() { void *p = dlsym(RTLD_NEXT, "__pthread_mutex_lock"); printf("__pthread_mutex_lock: %p (via RTLD_NEXT)\n", p); return 0; } EOL # glibc 2.35: __pthread_mutex_lock: 0x7ffff7e27f70 (via RTLD_NEXT) # glibc 2.36: __pthread_mutex_lock: (nil) (via RTLD_NEXT) [4]: https://gist.github.com/azat/3b5f2ae6011bef2ae86392cea7789eb7 But ThreadFuzzer uses internal symbols to wrap pthread_mutex_lock/pthread_mutex_unlock, which are intercepted by ASan and this leads to NULL dereference. The fix was obvious - just use dlsym(RTLD_NEXT), however on older glibc's this leads to endless recursion (see commits in the code). But only for jemalloc [5], and even though sanitizers does not uses jemalloc the code of ThreadFuzzer is generic and I don't want to guard it with more preprocessors macros. 
[5]: https://gist.github.com/azat/588d9c72c1e70fc13ebe113197883aa2 So we have to use RTLD_NEXT only for ASan. There is also one more interesting issue, if you will compile with clang that itself had been compiled with newer libc (i.e. 2.36), you will get the following error: $ podman run --privileged -v $PWD/.cmake-asan/programs:/root/bin -e PATH=/bin:/root/bin -e --rm -it ubuntu-dev-v3 clickhouse ==1==ERROR: AddressSanitizer failed to allocate 0x0 (0) bytes of SetAlternateSignalStack (error code: 22) ... ==1==End of process memory map. AddressSanitizer: CHECK failed: sanitizer_common.cpp:53 "((0 && "unable to mmap")) != (0)" (0x0, 0x0) (tid=1) <empty stack> The problem is that since GLIBC_2.31, `SIGSTKSZ` is a call to `getconf(_SC_MINSIGSTKSZ)`, but older glibc does not have it, so `-1` will be returned and used as `SIGSTKSZ` instead. The workaround to disable alternative stack: $ podman run --privileged -v $PWD/.cmake-asan/programs:/root/bin -e PATH=/bin:/root/bin -e ASAN_OPTIONS=use_sigaltstack=0 --rm -it ubuntu-dev-v3 clickhouse client --version ClickHouse client version 22.13.1.1. Fixes: ClickHouse#43426 Signed-off-by: Azat Khuzhin <[email protected]> * Initial inverted index docs * Fix * fix typos * Fix the case when merge-base does not show the oldest commit * Fix bad comparison * Fix typos * Don't duplicate writing guide, instead point to existing writing guide * update docs for async insert deduplication * Fixing build * fix typo * Update src/Functions/FunctionsConversion.h Co-authored-by: Alexander Gololobov <[email protected]> * Make a bit better * Add docs * Introduce non-throwing variants of hasToken * Document functions * Produce a null map of the correct size * Update BoundedReadBuffer.cpp * Better test * Review fixes * Fix aborts in arrow lib * Better comment * Add more retries to AST Fuzzer * Update docs/en/operations/settings/merge-tree-settings.md Co-authored-by: Dan Roscigno <[email protected]> * Update docs/en/operations/settings/merge-tree-settings.md Co-authored-by: Dan Roscigno <[email protected]> * Update docs/en/operations/settings/settings.md Co-authored-by: Dan Roscigno <[email protected]> * Update docs/en/operations/settings/settings.md Co-authored-by: Dan Roscigno <[email protected]> * Update docs/en/operations/settings/settings.md Co-authored-by: Dan Roscigno <[email protected]> * address comments * Fix possible deadlock with allow_asynchronous_read_from_io_pool_for_merge_tree in case of exception from ThreadPool::schedule * Fix crash when `ListObjects` request fails (ClickHouse#45371) * Fix schema inference in hdfsCluster * Cleanup PullingAsyncPipelineExecutor::cancel() Signed-off-by: Azat Khuzhin <[email protected]> * Catch exception on query cancellation Since we still want to join the thread, yes it will be done in dtor, but this looks better. Signed-off-by: Azat Khuzhin <[email protected]> * Fix possible (likely distributed) query hung Recently I saw the following, the client executed long distributed query and terminated the connection, and in this case query cancellation will be done from PullingAsyncPipelineExecutor dtor, but during cancellation one of nodes sent ECONNRESET, and this leads to an exception from PullingAsyncPipelineExecutor::cancel(), and this leads to a deadlock when multiple threads waits each others, because cancel() for LazyOutputFormat wasn't called. 
Here is as relevant portion of logs: 2023.01.04 08:26:09.236208 [ 37968 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Debug> executeQuery: (from 10.61.13.253:44266, user: default) TooLongDistributedQueryToPost ... 2023.01.04 08:26:09.262424 [ 37968 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Trace> MergeTreeInOrderSelectProcessor: Reading 1 ranges in order from part 9_330_538_18, approx. 61440 rows starting from 0 2023.01.04 08:26:09.266399 [ 26788 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Trace> Connection (s4.ch:9000): Connecting. Database: (not specified). User: default 2023.01.04 08:26:09.266849 [ 26788 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Trace> Connection (s4.ch:9000): Connected to ClickHouse server version 22.10.1. 2023.01.04 08:26:09.267165 [ 26788 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Debug> Connection (s4.ch:9000): Sent data for 2 scalars, total 2 rows in 3.1587e-05 sec., 62635 rows/sec., 68.00 B (2.03 MiB/sec.), compressed 0.4594594594594595 times to 148.00 B (4.41 MiB/sec.) 2023.01.04 08:39:13.047170 [ 37968 ] {f2ed6149-146d-4a3d-874a-b0b751c7b567} <Error> PullingAsyncPipelineExecutor: Code: 210. DB::NetException: Connection reset by peer, while writing to socket (10.7.142.115:9000). (NETWORK_ERROR), Stack trace (when copying this message, always include the lines below): 0. ./.build/./contrib/libcxx/include/exception:133: Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x1818234c in /usr/lib/debug/usr/bin/clickhouse.debug 1. ./.build/./src/Common/Exception.cpp:69: DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x1004fbda in /usr/lib/debug/usr/bin/clickhouse.debug 2. ./.build/./src/Common/NetException.h:12: DB::WriteBufferFromPocoSocket::nextImpl() @ 0x14e352f3 in /usr/lib/debug/usr/bin/clickhouse.debug 3. ./.build/./src/IO/BufferBase.h:39: DB::Connection::sendCancel() @ 0x15c21e6b in /usr/lib/debug/usr/bin/clickhouse.debug 4. ./.build/./src/Client/MultiplexedConnections.cpp:0: DB::MultiplexedConnections::sendCancel() @ 0x15c4d5b7 in /usr/lib/debug/usr/bin/clickhouse.debug 5. ./.build/./src/QueryPipeline/RemoteQueryExecutor.cpp:627: DB::RemoteQueryExecutor::tryCancel(char const*, std::__1::unique_ptr<DB::RemoteQueryExecutorReadContext, std::__1::default_delete<DB::RemoteQueryExecutorReadContext> >*) @ 0x14446c09 in /usr/lib/debug/usr/bin/clickhouse.debug 6. ./.build/./contrib/libcxx/include/__iterator/wrap_iter.h:100: DB::ExecutingGraph::cancel() @ 0x15d2c0de in /usr/lib/debug/usr/bin/clickhouse.debug 7. ./.build/./contrib/libcxx/include/__memory/unique_ptr.h:300: DB::PullingAsyncPipelineExecutor::cancel() @ 0x15d32055 in /usr/lib/debug/usr/bin/clickhouse.debug 8. ./.build/./contrib/libcxx/include/__memory/unique_ptr.h:312: DB::PullingAsyncPipelineExecutor::~PullingAsyncPipelineExecutor() @ 0x15d31f4f in /usr/lib/debug/usr/bin/clickhouse.debug 9. ./.build/./src/Server/TCPHandler.cpp:0: DB::TCPHandler::processOrdinaryQueryWithProcessors() @ 0x15cde919 in /usr/lib/debug/usr/bin/clickhouse.debug 10. ./.build/./src/Server/TCPHandler.cpp:0: DB::TCPHandler::runImpl() @ 0x15cd8554 in /usr/lib/debug/usr/bin/clickhouse.debug 11. ./.build/./src/Server/TCPHandler.cpp:1904: DB::TCPHandler::run() @ 0x15ce6479 in /usr/lib/debug/usr/bin/clickhouse.debug 12. ./.build/./contrib/poco/Net/src/TCPServerConnection.cpp:57: Poco::Net::TCPServerConnection::start() @ 0x18074f07 in /usr/lib/debug/usr/bin/clickhouse.debug 13. 
./.build/./contrib/libcxx/include/__memory/unique_ptr.h:54: Poco::Net::TCPServerDispatcher::run() @ 0x180753ed in /usr/lib/debug/usr/bin/clickhouse.debug 14. ./.build/./contrib/poco/Foundation/src/ThreadPool.cpp:213: Poco::PooledThread::run() @ 0x181e3807 in /usr/lib/debug/usr/bin/clickhouse.debug 15. ./.build/./contrib/poco/Foundation/include/Poco/SharedPtr.h:156: Poco::ThreadImpl::runnableEntry(void*) @ 0x181e1483 in /usr/lib/debug/usr/bin/clickhouse.debug 16. ? @ 0x7ffff7e55fd4 in ? 17. ? @ 0x7ffff7ed666c in ? (version 22.10.1.1) And here is the state of the threads: <details> <summary>system.stack_trace</summary> ```sql SELECT arrayStringConcat(arrayMap(x -> demangle(addressToSymbol(x)), trace), '\n') AS sym FROM system.stack_trace WHERE query_id = 'f2ed6149-146d-4a3d-874a-b0b751c7b567' SETTINGS allow_introspection_functions=1 Row 1: ────── sym: pthread_cond_wait std::__1::condition_variable::wait(std::__1::unique_lock<std::__1::mutex>&) bool ConcurrentBoundedQueue<DB::Chunk>::emplaceImpl<DB::Chunk>(std::__1::optional<unsigned long>, DB::Chunk&&) DB::IOutputFormat::work() DB::ExecutionThreadContext::executeTask() DB::PipelineExecutor::executeStepImpl(unsigned long, std::__1::atomic<bool>*) Row 2: ────── sym: pthread_cond_wait Poco::EventImpl::waitImpl() DB::PipelineExecutor::joinThreads() DB::PipelineExecutor::executeImpl(unsigned long) DB::PipelineExecutor::execute(unsigned long) Row 3: ────── sym: pthread_cond_wait Poco::EventImpl::waitImpl() DB::PullingAsyncPipelineExecutor::Data::~Data() DB::PullingAsyncPipelineExecutor::~PullingAsyncPipelineExecutor() DB::TCPHandler::processOrdinaryQueryWithProcessors() DB::TCPHandler::runImpl() DB::TCPHandler::run() Poco::Net::TCPServerConnection::start() Poco::Net::TCPServerDispatcher::run() Poco::PooledThread::run() Poco::ThreadImpl::runnableEntry(void*) ``` </details> Signed-off-by: Azat Khuzhin <[email protected]> * Remove unnecessary getTotalRowCount function calls * Fix style * Use new copy s3 functions in S3ObjectStorage. * Forward declaration of ConcurrentBoundedQueue in ThreadStatus ThreadStatus is the header that recomplies almost all ClickHouse modules. Signed-off-by: Azat Khuzhin <[email protected]> * Revert "Merge pull request ClickHouse#44922 from azat/dist/async-INSERT-metrics" There are the following problems with this patch: - Looses files on exception - Existing current_batch.txt on startup leads to ENOENT error and hung of distributed sends without ATTACH/DETACH - Race between creating the queue for sending at table startup and INSERT, if it had been created from INSERT, then it will not be initialized from disk They were addressed in ClickHouse#45491, but it makes code more cmoplex and plus since, likely, the release is comming, it is better to revert the change. This reverts commit 94604f7, reversing changes made to 80f6a45. * Fix possible in-use table after DETACH Right now in case of DETACH/ATTACH there can be a window when after the table had been DETACH'ed someone will still use it, the common example here is MVs handling. It happens because TableExclusiveLockHolder does not guards the shard_ptr of the IStorage, and so if someone holds it, then it can use it. So if ATTACH will be done for this table then, you can have multiple instances of it. It is not possible for DROP, because before using a table, you should lock it and after table had been DROP'ed you cannot lock it anymore. So let's do the same for DETACH. 
Signed-off-by: Azat Khuzhin <[email protected]> * Docs: Fix weird formatting * Formatting fixup * Docs: Fix link to writing guide * Performance report: "Partial queries" --> "Backward-incompatible queries * Updated backup/restore status when concurrent backups & restores are not allowed Implementation: * Moved concurrent backup/restore check inside try-catch block which sets the status so that other nodes in cluster are aware of failures. * Renamed backup_uuid to restore_uuid in RestoreSettings. Testing: * Updated test test_backup_and_restore_on_cluster/test_disallow_concurrency to check for specific backup/restore id. * Update report.py * Fix stress test * fix race in destructor of ParallelParsingInputFormat * add fields to table system.formats * Moved settings inside backups section - Updated backup/restore status when concurrent backups & restores are not allowed * Fix a race between Distributed table creation and INSERT into it Initializing queues for pending on-disk files for async INSERT cannot be done after table had been attached and visible to user, since it initializes the per-table counter, that is used during INSERT. Now there is a window, when this counter is not initialized and it will start from the beginning, and this could lead to CANNOT_LINK error: Destination file /data/clickhouse/data/urls_v1/urls_in/shard6_replica1/13129817.bin is already exist and have different inode Signed-off-by: Azat Khuzhin <[email protected]> * Update type-conversion-functions.md * Improve logging for TeePopen.timeout exceeded * fix test * fixing join data was released * Update column.md * Update column.md * Update column.md * Fix special build * Better comment * Fix MSan build * Update test.py * Fix typo * Fix and add a test * remove JSON * Review fixes * Update src/Interpreters/GinFilter.h Co-authored-by: Sergei Trifonov <[email protected]> * Update src/Storages/MergeTree/MergeTreeIndexInverted.cpp Co-authored-by: Sergei Trifonov <[email protected]> * Update src/Storages/MergeTree/MergeTreeIndexInverted.cpp Co-authored-by: Sergei Trifonov <[email protected]> * Fix endian issue in transform function for s390x * Fix cache policy getter * Better formatting for exception messages (ClickHouse#45449) * save format string for NetException * format exceptions * format exceptions 2 * format exceptions 3 * format exceptions 4 * format exceptions 5 * format exceptions 6 * fix * format exceptions 7 * format exceptions 8 * Update MergeTreeIndexGin.cpp * Update AggregateFunctionMap.cpp * Update AggregateFunctionMap.cpp * fix * add docs for PR 33302 * impl (ClickHouse#45289) * Refine the solution * Update formats.md Google has a new website for Protocol Buffers. The old link expires on Jan 31, 2023 * Add DISTINCT to INTERSECT and EXCEPT * Fix the build with ENABLE_VECTORSCAN disabled * Add unit test for recursive checkpoints * Moved concurrency checks inside functions - Updated backup/restore status when concurrent backups & restores are not allowed * Update StorageReplicatedMergeTree.cpp * Revert "Merge pull request ClickHouse#45493 from azat/fix-detach" This reverts commit a182a6b, reversing changes made to c47a29a. * Update stress * Disable the optimization to avoid sort.xml perf test fail in other PRs * Ignore utf errors in clickhouse-test reportLogStats * WIP * Move fsync inside transaction callback in DataPartStorageOnDisk::rename() Otherwise, it is useless. 
Signed-off-by: Azat Khuzhin <[email protected]> * Do fsync all files at once for fetched parts to decrease latency For filessystems like ext4, fsync of one file will handle all operations before, so this can be pretty time consuming. And in case of you write multiple files in a loop, and at the end of each iteration sync each file, then during writing of this file there can be other operations in journal, and hence more work for fsync. Let's call fsync for all files at once instead, like MergedBlockOutputStream does. Hope that keeping all file buffers till the end, will not cause troubles (buffering and so forth). Signed-off-by: Azat Khuzhin <[email protected]> * Fsync all small files at once after mutation Everything else is handled in MergedBlockOutputStream::finalizePart() Signed-off-by: Azat Khuzhin <[email protected]> * Revert "Revert "Merge pull request ClickHouse#45493 from azat/fix-detach"" This reverts commit 9dc4f02. * fix * fix try_shared_lock() in SharedMutex and CancelableSharedMutex * Update merge-tree-settings.md * Use ProfileEvents::S3CopyObject again. * Remove FuseSumCountAggregatesVisitor * fix * Fix tests * Addressed review comments and renamed function to hasConcurrentBackups/Restores - Updated backup/restore status when concurrent backups & restores are not allowed * Update docs/en/operations/settings/merge-tree-settings.md Co-authored-by: Alexander Tokmakov <[email protected]> * Remove tests with optimize_fuse_sum_count_avg * Make retries in copyS3File() more correct. * Fix cleanup in tests test_replicated_merge_tree_s3_restore. * Docs: mini semicolon fix * Document start of week in function date_diff() * Fix report sending in case of FastTest failure * split Format settings out * add new settings for s3 and hdfs * fix note formatting * Review suggestions * Additional check in MergeTreeReadPool (ClickHouse#45515) * Check ranges * Check equality just in case * Check under ndebug * Fix typo * Update tests/ci/fast_test_check.py * Apply suggestions from code review * update for split of format settings * Typo: "Granulesis" --> "Granules" * Docs: fix docs of EXPLAIN PLAN indexes=1 * add PARTITION BY to s3 and hdfs docs * add PARTITION BY to file and url docs * Create mongodb.md * Update mongodb.md * Fix version in autogenerated_versions.txt * Added two metrics about memory usage in cgroup to asynchronous metrics (ClickHouse#45301) * Update version to 23.1.2.1 * Backport ClickHouse#45636 to 23.1: Trim refs/tags/ from GITHUB_TAG in release workflow * Backport ClickHouse#45603 to 23.1: Fix wiping sensitive info in logs * Backport ClickHouse#45630 to 23.1: Fix performance of short queries with `Array` columns * Backport ClickHouse#45686 to 23.1: Fix key description when encountering duplicate primary keys * Update version to 23.1.3.1 * Backport ClickHouse#45818 to 23.1: Get rid of progress timestamps in release publishing * Backport ClickHouse#45871 to 23.1: Fix ipv6 parser * ignore warning * fix problems * fix problems --------- Signed-off-by: Azat Khuzhin <[email protected]> Co-authored-by: Robert Schulze <[email protected]> Co-authored-by: Maksim Kita <[email protected]> Co-authored-by: Kseniia Sumarokova <[email protected]> Co-authored-by: Anton Popov <[email protected]> Co-authored-by: Maksim Kita <[email protected]> Co-authored-by: Nikolai Kochetov <[email protected]> Co-authored-by: Azat Khuzhin <[email protected]> Co-authored-by: kssenii <[email protected]> Co-authored-by: Sema Checherinda <[email protected]> Co-authored-by: Mikhail f. 
Shiryaev <[email protected]> Co-authored-by: Han Fei <[email protected]> Co-authored-by: Kruglov Pavel <[email protected]> Co-authored-by: Alexander Tokmakov <[email protected]> Co-authored-by: Alexander Gololobov <[email protected]> Co-authored-by: Antonio Andelic <[email protected]> Co-authored-by: avogar <[email protected]> Co-authored-by: Nikolay Degterinsky <[email protected]> Co-authored-by: ltrk2 <[email protected]> Co-authored-by: Nikita Mikhaylov <[email protected]> Co-authored-by: vdimir <[email protected]> Co-authored-by: robot-ch-test-poll4 <[email protected]> Co-authored-by: Dan Roscigno <[email protected]> Co-authored-by: Nikolai Kochetov <[email protected]> Co-authored-by: Vitaly Baranov <[email protected]> Co-authored-by: Vitaly Baranov <[email protected]> Co-authored-by: Nikolay Degterinsky <[email protected]> Co-authored-by: Smita Kulkarni <[email protected]> Co-authored-by: Sergei Trifonov <[email protected]> Co-authored-by: Denny Crane <[email protected]> Co-authored-by: Dale Mcdiarmid <[email protected]> Co-authored-by: HarryLeeIBM <[email protected]> Co-authored-by: Alexey Milovidov <[email protected]> Co-authored-by: Nikita Taranov <[email protected]> Co-authored-by: Rich Raposa <[email protected]> Co-authored-by: Igor Nikonov <[email protected]> Co-authored-by: SmitaRKulkarni <[email protected]> Co-authored-by: Igor Nikonov <[email protected]> Co-authored-by: robot-ch-test-poll1 <[email protected]> Co-authored-by: Dmitry Novik <[email protected]> Co-authored-by: sichenzhao <[email protected]> Co-authored-by: robot-clickhouse <[email protected]> Co-authored-by: alesapin <[email protected]>
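One change in the squashed commit above, "Do fsync all files at once for fetched parts", is easy to illustrate: on ext4-like filesystems each fsync() also flushes unrelated journal work queued since the last sync, so syncing every file right after writing it multiplies the cost. A rough sketch of the batched approach, with made-up file names:

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <string>
#include <vector>

static void write_file(const std::string & path, const std::string & data)
{
    int fd = ::open(path.c_str(), O_CREAT | O_WRONLY | O_TRUNC, 0644);
    if (fd < 0) { std::perror("open"); return; }
    if (::write(fd, data.data(), data.size()) < 0)
        std::perror("write");
    ::close(fd);                 // note: no fsync per file
}

int main()
{
    std::vector<std::string> files = {"a.bin", "b.bin", "c.mrk"};

    for (const auto & f : files)
        write_file(f, "payload");

    // One fsync pass at the end, after all writes have been issued.
    for (const auto & f : files)
    {
        int fd = ::open(f.c_str(), O_RDONLY);
        if (fd < 0) continue;
        ::fsync(fd);
        ::close(fd);
    }
}
```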
lgbo-ustc pushed a commit that referenced this pull request on May 16, 2023
Reordering of the static variables should be enough to fix this: [ RUN ] IOResourceStaticResourceManager.Prioritization ================================================================= ==13==ERROR: AddressSanitizer: stack-use-after-scope on address 0x7f7d8d621970 at pc 0x5636b80dcbb2 bp 0x7f7c48e47dd0 sp 0x7f7c48e47dc8 READ of size 8 at 0x7f7d8d621970 thread T3975 (ThreadPool) #0 0x5636b80dcbb1 in IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_1::operator()(long) const build_docker/./src/IO/Resource/tests/gtest_resource_manager_static.cpp:78:13 #1 0x5636b80dcbb1 in IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0::operator()() const build_docker/./src/IO/Resource/tests/gtest_resource_manager_static.cpp:95:21 #2 0x5636b80dcbb1 in decltype(std::declval<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&>()()) std::__1::__invoke[abi:v15000]<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&>(IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23 #3 0x5636b80dcbb1 in decltype(auto) std::__1::__apply_tuple_impl[abi:v15000]<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&, std::__1::tuple<>&>(IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&, std::__1::tuple<>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/tuple:1789:1 #4 0x5636b80dcbb1 in decltype(auto) std::__1::apply[abi:v15000]<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&, std::__1::tuple<>&>(IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&, std::__1::tuple<>&) build_docker/./contrib/llvm-project/libcxx/include/tuple:1798:1 #5 0x5636b80dcbb1 in ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0>(IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&&)::'lambda'()::operator()() build_docker/./src/Common/ThreadPool.h:228:13 #6 0x5636b80dcbb1 in decltype(std::declval<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0>()()) std::__1::__invoke[abi:v15000]<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0>(IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&&)::'lambda'()&>(IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23 #7 0x5636b80dcbb1 in void std::__1::__invoke_void_return_wrapper<void, true>::__call<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0>(IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&&)::'lambda'()&>(ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0>(IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&&)::'lambda'()&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9 #8 0x5636b80dcbb1 in std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0>(IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&&)::'lambda'(), void ()>::operator()[abi:v15000]() 
build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12 #9 0x5636b80dcbb1 in void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0>(IOResourceStaticResourceManager_Prioritization_Test::TestBody()::$_0&&)::'lambda'(), void ()>>(std::__1::__function::__policy_storage const*) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16 #10 0x5636ea219310 in std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16 #11 0x5636ea219310 in std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12 #12 0x5636ea219310 in ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) build_docker/./src/Common/ThreadPool.cpp:416:13 #13 0x5636ea2258ac in void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()::operator()() const build_docker/./src/Common/ThreadPool.cpp:180:73 #14 0x5636ea2258ac in decltype(std::declval<void>()()) std::__1::__invoke[abi:v15000]<void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(void&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23 #15 0x5636ea2258ac in void std::__1::__thread_execute[abi:v15000]<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>(std::__1::tuple<void, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>&, std::__1::__tuple_indices<>) build_docker/./contrib/llvm-project/libcxx/include/thread:284:5 #16 0x5636ea2258ac in void* std::__1::__thread_proxy[abi:v15000]<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct>>, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, long, std::__1::optional<unsigned long>, bool)::'lambda0'()>>(void*) build_docker/./contrib/llvm-project/libcxx/include/thread:295:5 #17 0x7f7d8fcf1608 in start_thread (/lib/x86_64-linux-gnu/libpthread.so.0+0x8608) (BuildId: 7b4536f41cdaa5888408e82d0836e33dcf436466) #18 0x7f7d8fc16132 in __clone (/lib/x86_64-linux-gnu/libc.so.6+0x11f132) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee) Address 0x7f7d8d621970 is located in stack of thread T0 at offset 368 in frame #0 0x5636b80d9bef in IOResourceStaticResourceManager_Prioritization_Test::TestBody() build_docker/./src/IO/Resource/tests/gtest_resource_manager_static.cpp:46 This frame has 10 object(s): [32, 36) 'requests_per_thead' (line 48) [48, 208) 't' (line 49) [272, 296) 'ref.tmp' (line 51) [336, 352) 'last_priority' (line 74) [368, 376) 'check' (line 75) <== Memory access at offset 368 is inside this variable [400, 424) 'name' (line 83) [464, 512) 'ref.tmp13' (line 87) [544, 560) 'c' (line 101) [576, 600) 'ref.tmp32' (line 101) [640, 664) 'ref.tmp41' (line 102) Signed-off-by: Azat Khuzhin <[email protected]>
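The stack-use-after-scope above is an ordering problem between the test body's locals and the worker threads that use them: a ThreadPool thread is still reading stack objects of TestBody (`last_priority`, `check`) after their scope has ended. A generic C++ sketch of that class of bug and of the "reorder the variables" style of fix, using made-up types rather than the real test:

```cpp
#include <atomic>
#include <chrono>
#include <functional>
#include <thread>
#include <vector>

struct SimplePool
{
    std::vector<std::thread> threads;
    void schedule(std::function<void()> f) { threads.emplace_back(std::move(f)); }
    ~SimplePool() { for (auto & t : threads) t.join(); }   // joins on destruction
};

void test_body_fixed()
{
    std::atomic<int> counter{0};   // declared BEFORE the pool => destroyed AFTER it
    SimplePool pool;

    for (int i = 0; i < 4; ++i)
        pool.schedule([&counter]
        {
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
            ++counter;             // safe: the pool dtor joins before counter dies
        });
}   // destruction runs in reverse order: pool (joins threads) first, then counter

int main() { test_body_fixed(); }
```

Declaring shared state so that it outlives the threads that touch it (C++ destroys locals in reverse declaration order) is the general shape of the reordering fix.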
zhanglistar pushed a commit that referenced this pull request on Jun 9, 2023
Fixed convertFieldToType case of converting Date and Date32 to DateTime64 Field
lgbo-ustc pushed a commit that referenced this pull request on Jun 25, 2023
Hackish way of setting up timezone on the client
taiyang-li pushed a commit that referenced this pull request on Mar 18, 2024
Update check-large-objects.sh to be language neutral
lgbo-ustc pushed a commit that referenced this pull request on Apr 16, 2024
Fixes integration test_reload_certificate/test.py::test_first_than_second_cert --- E Exception: Sanitizer assert found for instance ================== E WARNING: ThreadSanitizer: data race (pid=1) E Write of size 8 at 0x7b2800025d30 by thread T2 (mutexes: write M0, write M1): E #0 free <null> (clickhouse+0x709a3e5) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e) E #1 CRYPTO_free build_docker/./contrib/openssl/crypto/mem.c:282:5 (clickhouse+0x2015f8ea) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e) E #2 EVP_PKEY_free build_docker/./contrib/openssl/crypto/evp/p_lib.c:1809:5 (clickhouse+0x2012a751) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e) E #3 Poco::Crypto::EVPPKey::~EVPPKey() build_docker/./base/poco/Crypto/src/EVPPKey.cpp:121:17 (clickhouse+0x1d00ffa9) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e) E #4 DB::CertificateReloader::Data::~Data() build_docker/./src/Server/CertificateReloader.h:71:12 (clickhouse+0x194fb42d) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e) E #5 std::__1::default_delete<DB::CertificateReloader::Data const>::operator()[abi:v15000](DB::CertificateReloader::Data const*) const build_docker/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:48:5 (clickhouse+0x194fb42d) E #6 std::__1::__shared_ptr_pointer<DB::CertificateReloader::Data const*, std::__1::default_delete<DB::CertificateReloader::Data const>, std::__1::allocator<DB::CertificateReloader::Data const>>::__on_zero_shared() build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:263:5 (clickhouse+0x194fb42d) E #7 std::__1::__shared_count::__release_shared[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:174:9 (clickhouse+0x194fade0) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e) E #8 std::__1::__shared_weak_count::__release_shared[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:215:27 (clickhouse+0x194fade0) E #9 std::__1::shared_ptr<DB::CertificateReloader::Data const>::~shared_ptr[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:702:23 (clickhouse+0x194fade0) E #10 std::__1::shared_ptr<DB::CertificateReloader::Data const>::operator=[abi:v15000](std::__1::shared_ptr<DB::CertificateReloader::Data const>&&) build_docker/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:723:9 (clickhouse+0x194fade0) E #11 MultiVersion<DB::CertificateReloader::Data>::set(std::__1::unique_ptr<DB::CertificateReloader::Data const, std::__1::default_delete<DB::CertificateReloader::Data const>>&&) build_docker/./src/Common/MultiVersion.h:76:25 (clickhouse+0x194fade0) E #12 DB::CertificateReloader::tryLoad(Poco::Util::AbstractConfiguration const&) build_docker/./src/Server/CertificateReloader.cpp:83:18 (clickhouse+0x194f94ca) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e) E #13 DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_6::operator()(Poco::AutoPtr<Poco::Util::AbstractConfiguration>, bool) const build_docker/./programs/server/Server.cpp:1546:45 (clickhouse+0xf384df7) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e) E #14 decltype(std::declval<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, 
std::__1::allocator<char>>>> const&)::$_6&>()(std::declval<Poco::AutoPtr<Poco::Util::AbstractConfiguration>>(), std::declval<bool>())) std::__1::__invoke[abi:v15000]<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_6&, Poco::AutoPtr<Poco::Util::AbstractConfiguration>, bool>(DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_6&, Poco::AutoPtr<Poco::Util::AbstractConfiguration>&&, bool&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23 (clickhouse+0xf3827a9) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e) E #15 void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_6&, Poco::AutoPtr<Poco::Util::AbstractConfiguration>, bool>(DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_6&, Poco::AutoPtr<Poco::Util::AbstractConfiguration>&&, bool&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9 (clickhouse+0xf3827a9) E #16 std::__1::__function::__default_alloc_func<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_6, void (Poco::AutoPtr<Poco::Util::AbstractConfiguration>, bool)>::operator()[abi:v15000](Poco::AutoPtr<Poco::Util::AbstractConfiguration>&&, bool&&) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12 (clickhouse+0xf3827a9) E #17 void std::__1::__function::__policy_invoker<void (Poco::AutoPtr<Poco::Util::AbstractConfiguration>, bool)>::__call_impl<std::__1::__function::__default_alloc_func<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_6, void (Poco::AutoPtr<Poco::Util::AbstractConfiguration>, bool)>>(std::__1::__function::__policy_storage const*, Poco::AutoPtr<Poco::Util::AbstractConfiguration>&&, bool) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16 (clickhouse+0xf3827a9) E #18 std::__1::__function::__policy_func<void (Poco::AutoPtr<Poco::Util::AbstractConfiguration>, bool)>::operator()[abi:v15000](Poco::AutoPtr<Poco::Util::AbstractConfiguration>&&, bool&&) const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16 (clickhouse+0x19fd2cbe) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e) E #19 std::__1::function<void (Poco::AutoPtr<Poco::Util::AbstractConfiguration>, bool)>::operator()(Poco::AutoPtr<Poco::Util::AbstractConfiguration>, bool) const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12 (clickhouse+0x19fd2cbe) E #20 
DB::ConfigReloader::reloadIfNewer(bool, bool, bool, bool) build_docker/./src/Common/Config/ConfigReloader.cpp:150:13 (clickhouse+0x19fd2cbe)
E #21 DB::ConfigReloader::reload() build_docker/./src/Common/Config/ConfigReloader.h:51:21 (clickhouse+0xf38767c) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #22 DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_13::operator()() const build_docker/./programs/server/Server.cpp:1731:31 (clickhouse+0xf38767c)
E #23 decltype(std::declval<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_13&>()()) std::__1::__invoke[abi:v15000]<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_13&>(DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_13&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:394:23 (clickhouse+0xf38767c)
E #24 void std::__1::__invoke_void_return_wrapper<void, true>::__call<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_13&>(DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_13&) build_docker/./contrib/llvm-project/libcxx/include/__functional/invoke.h:479:9 (clickhouse+0xf38767c)
E #25 std::__1::__function::__default_alloc_func<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_13, void ()>::operator()[abi:v15000]() build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:235:12 (clickhouse+0xf38767c)
E #26 void std::__1::__function::__policy_invoker<void ()>::__call_impl<std::__1::__function::__default_alloc_func<DB::Server::main(std::__1::vector<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&)::$_13, void ()>>(std::__1::__function::__policy_storage const*) build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:716:16 (clickhouse+0xf38767c)
E #27 std::__1::__function::__policy_func<void ()>::operator()[abi:v15000]() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:848:16 (clickhouse+0x16907aa0) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #28 std::__1::function<void ()>::operator()() const build_docker/./contrib/llvm-project/libcxx/include/__functional/function.h:1187:12 (clickhouse+0x16907aa0)
E #29 DB::Context::reloadConfig() const build_docker/./src/Interpreters/Context.cpp:4357:5 (clickhouse+0x16907aa0)
E #30 DB::InterpreterSystemQuery::execute() build_docker/./src/Interpreters/InterpreterSystemQuery.cpp:577:29 (clickhouse+0x17e78c19) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #31 DB::executeQueryImpl(char const*, char const*, std::__1::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) build_docker/./src/Interpreters/executeQuery.cpp:1195:40 (clickhouse+0x17e3e462) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #32 DB::executeQuery(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::shared_ptr<DB::Context>, DB::QueryFlags, DB::QueryProcessingStage::Enum) build_docker/./src/Interpreters/executeQuery.cpp:1374:26 (clickhouse+0x17e39837) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #33 DB::TCPHandler::runImpl() build_docker/./src/Server/TCPHandler.cpp:518:54 (clickhouse+0x195cc651) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #34 DB::TCPHandler::run() build_docker/./src/Server/TCPHandler.cpp:2329:9 (clickhouse+0x195e8707) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #35 Poco::Net::TCPServerConnection::start() build_docker/./base/poco/Net/src/TCPServerConnection.cpp:43:3 (clickhouse+0x1d00d942) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #36 Poco::Net::TCPServerDispatcher::run() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:115:20 (clickhouse+0x1d00e1b1) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #37 Poco::PooledThread::run() build_docker/./base/poco/Foundation/src/ThreadPool.cpp:188:14 (clickhouse+0x1d20f2e6) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #38 Poco::(anonymous namespace)::RunnableHolder::run() build_docker/./base/poco/Foundation/src/Thread.cpp:45:11 (clickhouse+0x1d20d5af) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #39 Poco::ThreadImpl::runnableEntry(void*) build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:335:27 (clickhouse+0x1d20ba69) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E
E Previous atomic write of size 4 at 0x7b2800025d30 by thread T3 (mutexes: write M2):
E #0 CRYPTO_DOWN_REF build_docker/./contrib/openssl/include/internal/refcount.h:51:12 (clickhouse+0x2012a6e6) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #1 EVP_PKEY_free build_docker/./contrib/openssl/crypto/evp/p_lib.c:1795:5 (clickhouse+0x2012a6e6)
E #2 ssl_cert_clear_certs build_docker/./contrib/openssl/ssl/ssl_cert.c:246:9 (clickhouse+0x1ffafd37) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #3 ssl_cert_free build_docker/./contrib/openssl/ssl/ssl_cert.c:277:5 (clickhouse+0x1ffafd37)
E #4 ossl_ssl_connection_free build_docker/./contrib/openssl/ssl/ssl_lib.c:1458:5 (clickhouse+0x1ffba6af) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #5 SSL_free build_docker/./contrib/openssl/ssl/ssl_lib.c:1417:9 (clickhouse+0x1ffb920e) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #6 Poco::Net::SecureSocketImpl::reset() build_docker/./base/poco/NetSSL_OpenSSL/src/SecureSocketImpl.cpp:583:3 (clickhouse+0x1cfaac60) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #7 Poco::Net::SecureSocketImpl::~SecureSocketImpl() build_docker/./base/poco/NetSSL_OpenSSL/src/SecureSocketImpl.cpp:80:3 (clickhouse+0x1cfaac60)
E #8 Poco::Net::SecureStreamSocketImpl::~SecureStreamSocketImpl() build_docker/./base/poco/NetSSL_OpenSSL/src/SecureStreamSocketImpl.cpp:52:1 (clickhouse+0x1cfb15dd) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #9 Poco::Net::SecureStreamSocketImpl::~SecureStreamSocketImpl() build_docker/./base/poco/NetSSL_OpenSSL/src/SecureStreamSocketImpl.cpp:43:1 (clickhouse+0x1cfb15dd)
E #10 Poco::RefCountedObject::release() const build_docker/./base/poco/Foundation/include/Poco/RefCountedObject.h:86:13 (clickhouse+0x1cffc81e) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #11 Poco::Net::Socket::~Socket() build_docker/./base/poco/Net/src/Socket.cpp:68:10 (clickhouse+0x1cffc81e)
E #12 Poco::Net::StreamSocket::~StreamSocket() build_docker/./base/poco/Net/src/StreamSocket.cpp:63:1 (clickhouse+0x1d009c39) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #13 Poco::Net::TCPConnectionNotification::~TCPConnectionNotification() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:43:2 (clickhouse+0x1d00ef50) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #14 Poco::Net::TCPConnectionNotification::~TCPConnectionNotification() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:42:2 (clickhouse+0x1d00ef50)
E #15 Poco::RefCountedObject::release() const build_docker/./base/poco/Foundation/include/Poco/RefCountedObject.h:86:13 (clickhouse+0x1d00e203) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #16 Poco::AutoPtr<Poco::Notification>::~AutoPtr() build_docker/./base/poco/Foundation/include/Poco/AutoPtr.h:91:19 (clickhouse+0x1d00e203)
E #17 Poco::Net::TCPServerDispatcher::run() build_docker/./base/poco/Net/src/TCPServerDispatcher.cpp:122:3 (clickhouse+0x1d00e203)
E #18 Poco::PooledThread::run() build_docker/./base/poco/Foundation/src/ThreadPool.cpp:188:14 (clickhouse+0x1d20f2e6) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #19 Poco::(anonymous namespace)::RunnableHolder::run() build_docker/./base/poco/Foundation/src/Thread.cpp:45:11 (clickhouse+0x1d20d5af) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E #20 Poco::ThreadImpl::runnableEntry(void*) build_docker/./base/poco/Foundation/src/Thread_POSIX.cpp:335:27 (clickhouse+0x1d20ba69) (BuildId: 706d92b17db171493f293d517643f726ee1b7b1e)
E
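For readers unfamiliar with this kind of TSan report, here is a minimal, self-contained sketch of the access pattern being flagged: a plain (unsynchronized) read of a reference count in one thread racing with an atomic decrement of the same field in another. The names `Cert` and `refcount` are made up for illustration; this is not the actual OpenSSL/Poco code, just the shape of the race.

```cpp
// Sketch only: a plain read racing with an atomic RMW on the same field.
// Build with: clang++ -std=c++20 -fsanitize=thread race_sketch.cpp
#include <atomic>
#include <thread>

struct Cert
{
    int refcount = 1; // plain int field, as in a C-style reference count
};

int main()
{
    Cert cert;

    // "Reader": e.g. a config reload inspecting the currently installed certificate.
    std::thread reader([&]
    {
        volatile int r = cert.refcount; // plain, unsynchronized read -> racy
        (void)r;
    });

    // "Writer": e.g. a connection teardown dropping its reference with an atomic decrement.
    std::thread writer([&]
    {
        std::atomic_ref<int>(cert.refcount).fetch_sub(1, std::memory_order_acq_rel);
    });

    reader.join();
    writer.join();
}
```

Run a few times under TSan and it produces a report of the same shape as above: a read on one thread, with "Previous atomic write of size 4" on another, because mixing plain and atomic accesses to the same location is still a data race.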
lgbo-ustc pushed a commit that referenced this pull request Apr 16, 2024
This commit addresses the LeakSanitizer find below. I tried to reproduce it locally with 02802_clickhouse_disks_s3_copy.sh but couldn't: clickhouse-disks did not do anything useful (it would not even print logging), neither with a standard build nor with a leaksan build. I could not find further documentation for it, or even what this tool is supposed to do; perhaps it is just for internal use. Also,
- line numbers in the leaksan report were partially missing,
- I am not really sure how Sha256HMACOpenSSLImpl::Calculate ends up calling hmac_init (there must be some sort of static initialization somewhere, but I did not find it), and
- my fix is in a weird place due to other restrictions (see the commit in the aws-sdk-cpp contrib repo).
The chance that this fix fixes the leak is low. If it doesn't work, add "# Tag no-asan" plus a comment to 02802_clickhouse_disks_s3_copy.sh and don't worry further.
EDIT: The commit fixes the issue, everything is good. https://s3.amazonaws.com/clickhouse-test-reports/59870/b452e3d1ab87b8cc5810693aeea28f69ad28d671/stateless_tests__asan__[3_4].html
2024-03-19 08:34:03 =================================================================
2024-03-19 08:34:03 ==13149==ERROR: LeakSanitizer: detected memory leaks
2024-03-19 08:34:03
2024-03-19 08:34:03 Direct leak of 904 byte(s) in 1 object(s) allocated from:
2024-03-19 08:34:03 #0 0x55f9cb5a18ee in malloc (/usr/bin/clickhouse+0xa9a48ee) (BuildId: b3766b865d6580f6dcba75acf37673d4aeedc2b6)
2024-03-19 08:34:03 #1 0x55fa01e34070 in CRYPTO_malloc build_docker/./contrib/openssl/crypto/mem.c:202:11
2024-03-19 08:34:03 #2 0x55fa01e34070 in CRYPTO_zalloc build_docker/./contrib/openssl/crypto/mem.c:222:11
2024-03-19 08:34:03 #3 0x55fa01d6dcca in ossl_err_get_state_int build_docker/./contrib/openssl/crypto/err/err.c:691:17
2024-03-19 08:34:03 #4 0x55fa01d71748 in ERR_set_mark build_docker/./contrib/openssl/crypto/err/err_mark.c:19:10
2024-03-19 08:34:03 #5 0x55fa01f4735b in ossl_prov_digest_load_from_params build_docker/./contrib/openssl/providers/common/provider_util.c:194:5
2024-03-19 08:34:03 #6 0x55fa01ff467a in hmac_set_ctx_params build_docker/./contrib/openssl/providers/implementations/macs/hmac_prov.c:307:10
2024-03-19 08:34:03 #7 0x55fa01ff3ef2 in hmac_init build_docker/./contrib/openssl/providers/implementations/macs/hmac_prov.c:169:37
2024-03-19 08:34:03 #8 0x55f9fb83c75b in Aws::Utils::Crypto::Sha256HMACOpenSSLImpl::Calculate(Aws::Utils::Array<unsigned char> const&, Aws::Utils::Array<unsigned char> const&) (/usr/bin/clickhouse+0x3ac3f75b) (BuildId: b3766b865d6580f6dcba75acf37673d4aeedc2b6)
2024-03-19 08:34:03 #9 0x55f9fb82ebeb in Aws::Utils::Crypto::Sha256HMAC::Calculate(Aws::Utils::Array<unsigned char> const&, Aws::Utils::Array<unsigned char> const&) (/usr/bin/clickhouse+0x3ac31beb) (BuildId: b3766b865d6580f6dcba75acf37673d4aeedc2b6)
2024-03-19 08:34:03 #10 0x55f9fb6d5afd in Aws::Client::AWSAuthV4Signer::ComputeHash(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) const (/usr/bin/clickhouse+0x3aad8afd) (BuildId: b3766b865d6580f6dcba75acf37673d4aeedc2b6)
2024-03-19 08:34:03 #11 0x55f9fb6e0bad in Aws::Client::AWSAuthV4Signer::SignRequest(Aws::Http::HttpRequest&, char const*, char const*, bool) const (/usr/bin/clickhouse+0x3aae3bad) (BuildId: b3766b865d6580f6dcba75acf37673d4aeedc2b6)
2024-03-19 08:34:03 #12 0x55f9fb72651d in bool smithy::components::tracing::TracingUtils::MakeCallWithTiming<bool>(std::__1::function<bool ()>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&, smithy::components::tracing::Meter const&, std::__1::map<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::less<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>, std::__1::allocator<std::__1::pair<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>>>&&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) (/usr/bin/clickhouse+0x3ab2951d) (BuildId: b3766b865d6580f6dcba75acf37673d4aeedc2b6)
2024-03-19 08:34:03 #13 0x55f9fb70adcb in Aws::Client::AWSClient::AttemptOneRequest(std::__1::shared_ptr<Aws::Http::HttpRequest> const&, Aws::AmazonWebServiceRequest const&, char const*, char const*, char const*) const (/usr/bin/clickhouse+0x3ab0ddcb) (BuildId: b3766b865d6580f6dcba75acf37673d4aeedc2b6)
2024-03-19 08:34:03 #14 0x55f9fb6fdb17 in Aws::Client::AWSClient::AttemptExhaustively(Aws::Http::URI const&, Aws::AmazonWebServiceRequest const&, Aws::Http::HttpMethod, char const*, char const*, char cons
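As background on what LeakSanitizer means by a "direct leak" here, the sketch below is purely illustrative (it is not the OpenSSL or aws-sdk-cpp code path above): an allocation made during one-time setup whose last pointer is lost before the process exits, so the block is unreachable when LSan scans at shutdown.

```cpp
// Sketch only: the simplest shape of a "Direct leak of N byte(s) in 1 object(s)".
// Build with: clang++ -fsanitize=address leak_sketch.cpp   (LSan runs at exit on Linux)
#include <cstdlib>

static void init_once()
{
    // One-time setup allocates a block and then drops the only pointer to it,
    // so nothing can ever free it; LSan reports it as a direct leak at exit.
    void * state = std::malloc(904);
    (void)state;
}

int main()
{
    init_once();
    return 0;
}
```

In the real report the allocation survives because it is cached by OpenSSL's provider machinery rather than dropped outright, which is why the fix (or, failing that, a test tag / suppression) lives on the aws-sdk-cpp / contrib side rather than in a ClickHouse source file.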
lgbo-ustc pushed a commit that referenced this pull request Nov 12, 2024
Changelog category (leave one):
Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
Support limits and cancel for hive/hivecluster.
Limits:
Cancel: