
logging: fix race condition in changing the logger #891

Closed
wants to merge 3 commits into from

Conversation


@grrtrr grrtrr commented Mar 12, 2022

This race condition was found via clang's TSan (ThreadSanitizer) analyzer.

aws_logger_set/get operate on a shared global variable, but access to that variable is not thread-safe.

Use a mutex to avoid the race condition.

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

```c
result = s_root_logger_ptr;
aws_mutex_unlock(&s_root_logger_lock);

return result;
```

This returns a pointer to the logger. Through that pointer, a caller can still access the underlying data in a way that introduces a data race:

```c
struct aws_logger *logger = aws_logger_get();
logger->foo = xx;
```

Since this can still race, AWS needs to provide a better solution here, whereby the API is changed to return a copy.
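To illustrate the point being made: the sketch below contrasts the PR's mutex-guarded getter with a copy-returning variant. This is a minimal sketch, not the actual aws-c-common code; the names mirror the PR's `s_root_logger_ptr`/`s_root_logger_lock`, and pthreads stands in for `aws_mutex`. The `logger` struct is hypothetical.

```c
#include <pthread.h>

/* Hypothetical logger type, standing in for struct aws_logger. */
struct logger { int level; };

static struct logger s_root_logger = { 0 };
static struct logger *s_root_logger_ptr = &s_root_logger;
static pthread_mutex_t s_root_logger_lock = PTHREAD_MUTEX_INITIALIZER;

/* The PR's approach: guard the pointer read with a mutex. The lock makes
 * reading the pointer itself race-free, but the pointer escapes the
 * critical section, so later dereferences can still race with a setter. */
static struct logger *logger_get(void) {
    pthread_mutex_lock(&s_root_logger_lock);
    struct logger *result = s_root_logger_ptr;
    pthread_mutex_unlock(&s_root_logger_lock);
    return result;
}

/* The reviewer's suggestion: return a copy taken under the lock, so no
 * pointer to shared state ever escapes the critical section. */
static struct logger logger_get_copy(void) {
    pthread_mutex_lock(&s_root_logger_lock);
    struct logger copy = *s_root_logger_ptr;
    pthread_mutex_unlock(&s_root_logger_lock);
    return copy;
}
```

The copy-returning variant trades a small amount of copying for the guarantee that callers cannot mutate or observe the shared logger outside the lock.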

@waahm7
Contributor

waahm7 commented Dec 30, 2022

Hi Gerrit,

Thank you for the contribution. This API is not thread-safe and must only be called once, as documented:

> Sets the aws logger used globally across the process. Not thread-safe. Must only be called once.

Please feel free to reopen the PR if there is anything else.
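Given that contract ("must only be called once"), a call site can enforce single initialization itself. A minimal sketch under that assumption, using `pthread_once`; the wrapper names are hypothetical, and the flag stands in for the real `aws_logger_set(...)` call:

```c
#include <pthread.h>

static pthread_once_t s_logger_once = PTHREAD_ONCE_INIT;

/* Stands in for the side effect of calling aws_logger_set(&logger). */
static int s_logger_configured = 0;

static void configure_logger_once(void) {
    /* Real code would build a logger and call aws_logger_set() here. */
    s_logger_configured = 1;
}

/* Safe to call from any thread, any number of times: pthread_once
 * guarantees configure_logger_once runs exactly once. */
void app_init_logging(void) {
    pthread_once(&s_logger_once, configure_logger_once);
}
```

This keeps the library API unchanged while satisfying its single-call requirement at the application boundary.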

@grrtrr
Author

grrtrr commented Dec 31, 2022

> This API is not thread safe and must only be called once. Please feel free to reopen the PR if there is anything else.

This is a very old PR. The race occurred when aws-sdk-cpp's Aws::InitAPI and Aws::ShutdownAPI were called from different threads. We ran into a similar problem (a memory leak) in aws/s2n-tls#3525.

To avoid that problem, we have been using a dedicated initializer thread, which takes care of calling both Aws::InitAPI and Aws::ShutdownAPI (details are in aws/s2n-tls#3530 (comment)). It may be that this also fixed the race condition we experienced when using aws-c-common via the aws-sdk-cpp.

Should the problem re-occur, I will create an issue.
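The dedicated-initializer-thread workaround described above can be sketched as follows. This is a hedged illustration in C with pthreads; `sdk_init`/`sdk_shutdown` are hypothetical stand-ins for aws-sdk-cpp's `Aws::InitAPI`/`Aws::ShutdownAPI`:

```c
#include <pthread.h>

/* Hypothetical stand-ins for Aws::InitAPI / Aws::ShutdownAPI. */
static int s_sdk_initialized = 0;
static void sdk_init(void)     { s_sdk_initialized = 1; }
static void sdk_shutdown(void) { s_sdk_initialized = 0; }

/* One thread owns the whole SDK lifecycle, so init and shutdown are
 * guaranteed to run on the same thread. */
static void *init_shutdown_thread(void *arg) {
    (void)arg;
    sdk_init();
    /* ... a real version would signal readiness here, then block until
     * the application requests shutdown ... */
    sdk_shutdown();
    return NULL;
}

int run_sdk_lifecycle(void) {
    pthread_t t;
    if (pthread_create(&t, NULL, init_shutdown_thread, NULL) != 0)
        return -1;
    return pthread_join(t, NULL);
}
```

Pinning both calls to one thread sidesteps the cross-thread init/shutdown ordering problem entirely, at the cost of some signaling plumbing between that thread and the rest of the application.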

@grrtrr grrtrr deleted the fix_logging_TSAN_warning branch December 31, 2022 16:56
grrtrr added a commit to grrtrr/aws-sdk-cpp that referenced this pull request Jan 28, 2023
…be called from the same thread

Running InitAPI() and ShutdownAPI() from different threads can cause subtle problems, see e.g.
- aws/s2n-tls#3525
- awslabs/aws-c-common#891

Add a note for users to use the same thread.
SergeyRyabinin pushed a commit to aws/aws-sdk-cpp that referenced this pull request Jan 30, 2023
…be called from the same thread

jmklix pushed a commit to aws/aws-sdk-cpp that referenced this pull request Aug 11, 2023
…be called from the same thread
3 participants