Receiver: cache matchers for series calls #7353
base: main
Conversation
The results indicate that the "store-proxy-cache-matchers" branch considerably outperforms the "main" branch in all observed aspects of the BenchmarkProxySeriesRegex function. It is roughly 10 times faster in execution time while using about 9 times less memory and making about 4 times fewer allocations per operation. These improvements suggest significant optimizations in the regex handling or related data processing in the "store-proxy-cache-matchers" branch compared to the "main" branch.
Was this AI generated? 😄
pkg/store/storepb/matcher_cache.go (Outdated)

func (c *MatchersCache) GetOrSet(key LabelMatcher, newItem NewItemFunc) (*labels.Matcher, error) {
	c.metrics.requestsTotal.Inc()
	if item, ok := c.cache.Get(key); ok {
I suggest using singleflight here to reduce allocations even more.
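For context, singleflight deduplicates concurrent lookups for the same key, so a burst of identical misses triggers only one expensive computation. The real change would likely use golang.org/x/sync/singleflight; the sketch below implements the same idea with only the standard library, and all names (MatchersCache, GetOrSet, the plain map standing in for the LRU) are illustrative, not the PR's final code.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// call tracks one in-flight computation so concurrent misses on the same key
// wait for the leader instead of recomputing.
type call struct {
	wg  sync.WaitGroup
	val string
	err error
}

// MatchersCache caches computed results by key. The map stands in for the
// LRU cache used in the PR; this sketch only shows the singleflight idea.
type MatchersCache struct {
	mu     sync.Mutex
	cache  map[string]string
	flight map[string]*call
}

func NewMatchersCache() *MatchersCache {
	return &MatchersCache{cache: map[string]string{}, flight: map[string]*call{}}
}

// GetOrSet returns the cached value for key, computing it at most once even
// when many goroutines miss concurrently.
func (c *MatchersCache) GetOrSet(key string, newItem func() (string, error)) (string, error) {
	c.mu.Lock()
	if v, ok := c.cache[key]; ok {
		c.mu.Unlock()
		return v, nil
	}
	if cl, ok := c.flight[key]; ok { // another goroutine is computing this key
		c.mu.Unlock()
		cl.wg.Wait()
		return cl.val, cl.err
	}
	cl := &call{}
	cl.wg.Add(1)
	c.flight[key] = cl
	c.mu.Unlock()

	cl.val, cl.err = newItem() // the expensive part, e.g. regex compilation

	c.mu.Lock()
	if cl.err == nil {
		c.cache[key] = cl.val
	}
	delete(c.flight, key)
	c.mu.Unlock()
	cl.wg.Done()
	return cl.val, cl.err
}

func main() {
	c := NewMatchersCache()
	var compiles int32
	var wg sync.WaitGroup
	for i := 0; i < 16; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.GetOrSet(`env=~"prod.*"`, func() (string, error) {
				atomic.AddInt32(&compiles, 1)
				return "compiled-matcher", nil
			})
		}()
	}
	wg.Wait()
	fmt.Println(atomic.LoadInt32(&compiles)) // exactly one compilation
}
```

The allocation win comes from the waiters sharing the leader's result instead of each building (and then discarding) their own matcher.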
Thanks! Did that.
Kind ask from Cortex :D
Is it possible to make this interface receive the Prometheus types instead of the Thanos ones, so we can reuse the same implementation in Cortex?
Ex:
GetOrSet(t labels.MatchType, n, v string, newItem NewItemFunc) (*labels.Matcher, error)
@alanprot could you link where in Cortex you would use this? I introduced an interface now, which prompb.LabelMatcher implements, and made storepb.LabelMatcher implement it as well. Let me know if that is enough for Cortex to reuse the code.
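A rough sketch of what such a decoupling interface could look like, so the cache depends only on getter methods that both proto matcher types already happen to provide. All names here (ConversionLabelMatcher, storeMatcher, cacheKey) are illustrative stand-ins, not the actual types from the PR:

```go
package main

import "fmt"

// ConversionLabelMatcher sketches a neutral interface that both
// storepb.LabelMatcher and prompb.LabelMatcher could satisfy, so the cache
// does not depend on Thanos proto types.
type ConversionLabelMatcher interface {
	GetName() string
	GetValue() string
	MatchType() int32 // proto enum value (EQ, NEQ, RE, NRE)
}

// storeMatcher is a stand-in for a proto-generated matcher type.
type storeMatcher struct {
	name, value string
	typ         int32
}

func (m storeMatcher) GetName() string  { return m.name }
func (m storeMatcher) GetValue() string { return m.value }
func (m storeMatcher) MatchType() int32 { return m.typ }

// cacheKey derives a stable cache key from any matcher implementation.
func cacheKey(m ConversionLabelMatcher) string {
	return fmt.Sprintf("%d:%s=%s", m.MatchType(), m.GetName(), m.GetValue())
}

func main() {
	m := storeMatcher{name: "env", value: "prod.*", typ: 2}
	fmt.Println(cacheKey(m)) // 2:env=prod.*
}
```

With this shape, Cortex could implement the same interface on its own matcher type and reuse the cache unchanged.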
Ok, I think that works!
But I think the interface definition should be in the storecache package? Other than that, I think it would work just fine for us!
@@ -973,6 +986,8 @@ func (rc *receiveConfig) registerFlag(cmd extkingpin.FlagClause) {
		"about order.").
		Default("false").Hidden().BoolVar(&rc.allowOutOfOrderUpload)

	cmd.Flag("matcher-cache-size", "The size of the cache used for matching against external labels. Using 0 disables caching.").Default("0").IntVar(&rc.matcherCacheSize)
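Assuming the flag lands with the name and default shown in the diff above, enabling the cache on a receiver would look roughly like this (the other flags are ordinary receiver settings, shown only for context):

```shell
# Hypothetical invocation; only --matcher-cache-size comes from this PR.
thanos receive \
  --grpc-address=0.0.0.0:10901 \
  --tsdb.path=/var/thanos/receive \
  --matcher-cache-size=10000   # default 0 disables the matcher cache
```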
Should we add this to other components as well like Thanos Store?
I can do it, I fear making the PR hard to review though.
pkg/store/storepb/matcher_cache.go (Outdated)

	}
}

func NewMatchersCache(opts ...MatcherCacheOption) (*MatchersCache, error) {
Maybe we can just use pkg/cache/inmemory.go? It's another LRU implementation that already exists in the tree.
I would need to make it generic first, no? Or do you mean adding this LRU to be stored there as well? Also, I feel like this would introduce the need for the user to configure the cache via YAML configuration in the receiver, for example, which would get quite complex.
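For reference, "making it generic" would mean something like a type-parameterized LRU, instead of the string/[]byte interface that pkg/cache/inmemory.go exposes. This is a stdlib-only sketch of such a cache, purely illustrative of the shape under discussion (the PR itself relies on an existing LRU library):

```go
package main

import (
	"container/list"
	"fmt"
)

// LRU is a minimal generic LRU cache: typed keys and values, evicting the
// least recently used entry once capacity is exceeded.
type LRU[K comparable, V any] struct {
	cap   int
	ll    *list.List
	items map[K]*list.Element
}

type entry[K comparable, V any] struct {
	key K
	val V
}

func NewLRU[K comparable, V any](capacity int) *LRU[K, V] {
	return &LRU[K, V]{cap: capacity, ll: list.New(), items: map[K]*list.Element{}}
}

// Get returns the value for k and marks it as most recently used.
func (l *LRU[K, V]) Get(k K) (V, bool) {
	if el, ok := l.items[k]; ok {
		l.ll.MoveToFront(el)
		return el.Value.(entry[K, V]).val, true
	}
	var zero V
	return zero, false
}

// Add inserts or updates k, evicting the oldest entry if over capacity.
func (l *LRU[K, V]) Add(k K, v V) {
	if el, ok := l.items[k]; ok {
		el.Value = entry[K, V]{k, v}
		l.ll.MoveToFront(el)
		return
	}
	l.items[k] = l.ll.PushFront(entry[K, V]{k, v})
	if l.ll.Len() > l.cap {
		oldest := l.ll.Back()
		l.ll.Remove(oldest)
		delete(l.items, oldest.Value.(entry[K, V]).key)
	}
}

func main() {
	c := NewLRU[string, int](2)
	c.Add("a", 1)
	c.Add("b", 2)
	c.Add("c", 3) // evicts "a", the least recently used
	_, okA := c.Get("a")
	v, okC := c.Get("c")
	fmt.Println(okA, okC, v) // false true 3
}
```

A typed cache like this avoids serializing matchers to []byte on every lookup, which is part of why reusing the byte-oriented in-memory cache is awkward here.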
Hi @pedro-stanaka, do you plan to continue this PR? If you are busy with something else and not planning to come back to this PR, can we create a new PR to add this cache?
I can take a stab at finishing this up this week, probably. Will put it on my TODO.
@yeya24 @GiedriusS please take a look at the current version. Some stuff I think might be good doing, but not sure:
pkg/store/storepb/matcher_cache.go (Outdated)

@@ -0,0 +1,150 @@
// Copyright (c) The Thanos Authors.
Should we move this code out of the storepb package? storepb sounds more related to the proto itself, but this matcher cache can be more generic.
Moved it to the storecache package, which seems like a generic cache package.
pkg/store/storepb/matcher_cache.go (Outdated)

type MatchersCache interface {
	// GetOrSet retrieves a matcher from cache or creates and stores it if not present.
	// If the matcher is not in cache, it uses the provided newItem function to create it.
	GetOrSet(key LabelMatcher, newItem NewItemFunc) (*labels.Matcher, error)
Same here. Can we take a Prometheus matcher as the input key?
Because we want to convert to a Prometheus matcher, I don't see the reason to use it as the key. I will create an intermediate matcher that can be represented in a more neutral way. Let's see if with that we can use the cache in Cortex as well.
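One way to sketch such a neutral intermediate representation is a small comparable value type, usable directly as a map or LRU key; this also sidesteps the deep-equality problem mentioned in the commit history, since plain struct comparison works. All names here (MatcherKey, MatchType constants) are hypothetical, not the PR's final type:

```go
package main

import "fmt"

// MatchType mirrors the Prometheus matcher kinds in a backend-neutral way.
type MatchType int

const (
	MatchEqual MatchType = iota
	MatchNotEqual
	MatchRegexp
	MatchNotRegexp
)

// MatcherKey is a comparable value type: it can serve as a cache key without
// pointer-identity or deep-equality pitfalls, and both Thanos and Cortex
// matcher protos could be converted into it.
type MatcherKey struct {
	Type        MatchType
	Name, Value string
}

func main() {
	seen := map[MatcherKey]int{}
	a := MatcherKey{MatchRegexp, "env", "prod.*"}
	b := MatcherKey{MatchRegexp, "env", "prod.*"}
	seen[a]++
	seen[b]++ // same key: struct equality, not pointer identity
	fmt.Println(len(seen), seen[a]) // 1 2
}
```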
We have tried caching matchers before with a time-based expiration cache; this time we are trying with an LRU cache. We saw some of our receivers busy with compiling regexes and with high CPU usage, similar to the profile of the benchmark I added here:
* Adding matcher cache for method MatchersToPromMatchers and a new version which uses the cache.
* The main change is in the matchesExternalLabels function, which now receives a cache instance.

Squashed commits:
* adding matcher cache and refactor matchers (Co-authored-by: Andre Branchizio <[email protected]>)
* Using the cache in proxy and tsdb stores (only receiver)
* fixing problem with deep equality
* adding some docs
* Adding benchmark
* undo unnecessary changes
* Adjusting metric names
* adding changelog
* wiring changes to the receiver
* Fixing linting

Signed-off-by: Pedro Tanaka <[email protected]>
Summary
We have tried caching matchers before with a time-based expiration cache, this time we are trying with LRU cache.
We saw some of our receivers busy with compiling regexes and with high CPU usage, similar to the profile of the benchmark I added here:
Benchmark results
The results indicate that the "store-proxy-cache-matchers" branch considerably outperforms the "main" branch in all observed aspects of the BenchmarkProxySeriesRegex function. It is roughly 10 times faster in execution time while using about 9 times less memory and making about 4 times fewer allocations per operation. These improvements suggest significant optimizations in the regex handling or related data processing in the "store-proxy-cache-matchers" branch compared to the "main" branch.
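A comparison like the one above is typically produced by running the benchmark on both branches and diffing the results; a plausible reproduction recipe, with the package path assumed from the files touched in this PR, would be:

```shell
# On main, then again on the PR branch (hypothetical paths/filenames):
go test -bench=BenchmarkProxySeriesRegex -benchmem -count=5 ./pkg/store/ > old.txt
# ... switch branches, rerun into new.txt, then compare with benchstat
# (golang.org/x/perf/cmd/benchstat):
benchstat old.txt new.txt
```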
Changes
* Adding matcher cache for method MatchersToPromMatchers and a new version which uses the cache.
* The main change is in the matchesExternalLabels function, which now receives a cache instance.
Verification