[release-18.0] Throttler: Use tmclient pool for CheckThrottler tabletmanager RPC #15087
Merged: frouioui merged 1 commit into vitessio:release-18.0 from planetscale:backport-14979-release-18.0 on Jan 30, 2024
Conversation
Commit message (truncated): …tessio#14979) Signed-off-by: Matt Lord <[email protected]>
shlomi-noach requested review from ajm188, GuptaManan100, rohit-nayak-ps, and deepthi as code owners on January 30, 2024 at 11:35.
Review Checklist
Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.
General
Tests
Documentation
New flags
If a workflow is added or modified
Backward compatibility
vitess-bot added the labels NeedsBackportReason (if backport labels have been applied to a PR, a justification is required), NeedsDescriptionUpdate (the description is not clear or comprehensive enough, and needs work), NeedsIssue (a linked issue is missing for this Pull Request), and NeedsWebsiteDocsUpdate on Jan 30, 2024.
shlomi-noach added the Backport label (this is a backport) and removed the NeedsDescriptionUpdate, NeedsWebsiteDocsUpdate, NeedsIssue, and NeedsBackportReason labels on Jan 30, 2024.
mattlord approved these changes on Jan 30, 2024.
Comment: Just FYI, I'm pretty sure that you can add the backport labels after merge now and the backport PRs will get auto-created.
Reply: this is not implemented yet :'(
frouioui approved these changes on Jan 30, 2024.
release-18.0 backport of #14979
Description
When the tablet throttler is enabled in a keyspace, the tablets within each shard make very frequent CheckThrottler RPC calls between themselves after moving from HTTP to gRPC in #13514. The initial implementation created a new gRPC connection and dialed the other tablet on each CheckThrottler RPC call. Because this RPC is made so frequently, that was not practical from a performance perspective (CPU and network overhead, along with feature/input latency). In this PR we instead leverage the existing tabletmanagerclient pooling, where each tabletmanagerclient has its own gRPC connection, so that we re-use existing connections and avoid the overhead of constantly creating and destroying them on each RPC, which caused a great deal of TCP connection churn and related overhead.
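The connection-reuse idea behind the change can be sketched as follows. This is a minimal illustration, not Vitess's actual tmclient code: the `connPool`, `dial` function, `conn` type, and tablet address are all hypothetical stand-ins. The point it demonstrates is that caching one connection per address means frequent RPCs trigger only a single dial instead of one per call.

```go
package main

import (
	"fmt"
	"sync"
)

// conn stands in for a gRPC client connection. In the real code the
// pooled object is the tabletmanager client's gRPC connection.
type conn struct {
	addr string
}

// dialCount tracks how many real dials happen, to make the re-use visible.
var dialCount int

// dial simulates establishing a new (expensive) connection.
func dial(addr string) *conn {
	dialCount++
	return &conn{addr: addr}
}

// connPool caches one connection per tablet address so repeated
// CheckThrottler-style RPCs re-use it instead of dialing each time.
type connPool struct {
	mu    sync.Mutex
	conns map[string]*conn
}

func newConnPool() *connPool {
	return &connPool{conns: make(map[string]*conn)}
}

// Get returns the cached connection for addr, dialing only on first use.
func (p *connPool) Get(addr string) *conn {
	p.mu.Lock()
	defer p.mu.Unlock()
	if c, ok := p.conns[addr]; ok {
		return c
	}
	c := dial(addr)
	p.conns[addr] = c
	return c
}

func main() {
	pool := newConnPool()
	// Simulate many frequent RPCs to the same tablet (hypothetical address).
	for i := 0; i < 1000; i++ {
		_ = pool.Get("tablet-101:15999")
	}
	fmt.Println(dialCount) // prints 1: one dial despite 1000 calls
}
```

The mutex-guarded map is the simplest way to express the pattern; the production tmclient pool also handles connection lifecycle, errors, and sizing, which this sketch deliberately omits.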
Related Issue(s)
Checklist