Add postings list cache to M3DB index #1370

Merged: richardartoul merged 5 commits into master from ra/query-cache on Feb 14, 2019
Conversation

richardartoul (Contributor)

No description provided.

codecov bot commented Feb 11, 2019

Codecov Report

Merging #1370 into master will increase coverage by 0.1%.
The diff coverage is 93.3%.

Impacted file tree graph

@@           Coverage Diff            @@
##           master   #1370     +/-   ##
========================================
+ Coverage    70.6%   70.7%   +0.1%     
========================================
  Files         823     826      +3     
  Lines       70369   70663    +294     
========================================
+ Hits        49692   49973    +281     
- Misses      17455   17459      +4     
- Partials     3222    3231      +9
Flag        | Coverage Δ
----------- | -------------------------
#aggregator | 82.3% <ø> (ø) ⬆️
#cluster    | 85.9% <ø> (ø) ⬆️
#collector  | 63.7% <ø> (ø) ⬆️
#dbnode     | 81.1% <93.3%> (+0.1%) ⬆️
#m3em       | 73.2% <ø> (ø) ⬆️
#m3ninx     | 74.1% <ø> (-0.1%) ⬇️
#m3nsch     | 51.1% <ø> (ø) ⬆️
#metrics    | 17.6% <ø> (ø) ⬆️
#msg        | 74.9% <ø> (-0.3%) ⬇️
#query      | 64.4% <ø> (ø) ⬆️
#x          | 76% <ø> (ø) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4f3fd6e...c2b7cd3. Read the comment docs.

codecov bot commented Feb 11, 2019

Codecov Report

Merging #1370 into master will increase coverage by <.1%.
The diff coverage is 92.6%.

Impacted file tree graph

@@           Coverage Diff            @@
##           master   #1370     +/-   ##
========================================
+ Coverage    70.7%   70.7%   +<.1%     
========================================
  Files         823     826      +3     
  Lines       70484   70774    +290     
========================================
+ Hits        49840   50081    +241     
- Misses      17415   17449     +34     
- Partials     3229    3244     +15
Flag        | Coverage Δ
----------- | -------------------------
#aggregator | 82.3% <ø> (-0.1%) ⬇️
#cluster    | 85.9% <ø> (ø) ⬆️
#collector  | 63.7% <ø> (ø) ⬆️
#dbnode     | 81% <92.6%> (ø) ⬆️
#m3em       | 73.2% <ø> (ø) ⬆️
#m3ninx     | 74.1% <ø> (-0.1%) ⬇️
#m3nsch     | 51.1% <ø> (ø) ⬆️
#metrics    | 17.6% <ø> (ø) ⬆️
#msg        | 74.9% <ø> (ø) ⬆️
#query      | 64.7% <ø> (ø) ⬆️
#x          | 76% <ø> (ø) ⬆️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 7450997...6957750. Read the comment docs.

pattern string,
patternType PatternType,
) (postings.List, bool) {
// No RLock because a Get() operation mutates the LRU.
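For context, a minimal sketch of why an LRU Get takes the full write lock: a hit promotes the entry to the front of the recency list, so even a read mutates shared state. The names here are hypothetical and this is not the actual M3DB cache code.

```go
package cache

import (
	"container/list"
	"sync"
)

// cachedValue stands in for postings.List in this sketch.
type cachedValue interface{}

type lruCache struct {
	mu    sync.Mutex
	items map[string]*list.Element // key -> element in the recency list
	order *list.List               // front = most recently used
}

func newLRUCache() *lruCache {
	return &lruCache{items: make(map[string]*list.Element), order: list.New()}
}

// Get is effectively a write: a hit promotes the entry to the front of the
// recency list, which is why a plain Mutex is held rather than an RLock.
func (c *lruCache) Get(key string) (cachedValue, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()

	elem, ok := c.items[key]
	if !ok {
		return nil, false
	}
	c.order.MoveToFront(elem) // the mutation that rules out a read lock
	return elem.Value, true
}

// Put inserts or refreshes an entry (eviction elided for brevity).
func (c *lruCache) Put(key string, value cachedValue) {
	c.mu.Lock()
	defer c.mu.Unlock()

	if elem, ok := c.items[key]; ok {
		elem.Value = value
		c.order.MoveToFront(elem)
		return
	}
	c.items[key] = c.order.PushFront(value)
}
```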
Collaborator

What's the contention on this thing like? We could do a bunch of things to make it less expensive (e.g. approximate the LRU, sample updates, etc.).

Fine to do this as a follow-up too.
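As one example of the cheaper variants mentioned above, a hypothetical sketch of sampled LRU promotions: lookups take only a read lock, and the write lock is taken on a small fraction of hits, trading exact recency order for lower contention. This is illustrative only and not part of this PR.

```go
package cache

import (
	"container/list"
	"math/rand"
	"sync"
)

// sampledLRU promotes an entry on only a sampled fraction of hits, so most
// Gets never touch the write lock and the recency order becomes approximate.
type sampledLRU struct {
	mu         sync.RWMutex
	items      map[string]*list.Element
	order      *list.List // front = most recently used
	sampleRate float64    // e.g. 0.01 -> promote ~1% of hits
}

func (c *sampledLRU) Get(key string) (interface{}, bool) {
	c.mu.RLock()
	elem, ok := c.items[key]
	var value interface{}
	if ok {
		value = elem.Value
	}
	c.mu.RUnlock()
	if !ok {
		return nil, false
	}

	// Promote only occasionally to keep write-lock traffic low.
	if rand.Float64() < c.sampleRate {
		c.mu.Lock()
		// Re-check: the entry may have been evicted between the two locks.
		if elem, ok := c.items[key]; ok {
			c.order.MoveToFront(elem)
		}
		c.mu.Unlock()
	}
	return value, true
}
```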

Contributor Author

It doesn't show up in flame graphs at all, and at the load we're currently handling I don't think it's highly contended (100s of queries/s at peak, each maybe looking back 100 blocks, is ~10,000 lock acquisitions per second, which isn't much). I'd rather keep the implementation simple (it already kind of sucks that I had to specialize it) until it becomes an issue, which I don't think will happen until we have 1-2 orders of magnitude improvement in general query performance.

Collaborator

It wouldn't necessarily show up in the CPU flame graphs - did you try the blocking profile?

Mainly curious whether this is the cause of the latency spikes you mentioned, but we can investigate that separately. Don't want to block the PR.

Contributor Author

Ah... no, I did not check blocking. I will get a blocking profile next time I do benchmarking and see if this is an issue. My guess is that it's not the root cause of the latency spikes, but I'd love to be wrong as this would be an easy fix lol
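For reference, a minimal sketch of collecting a block (contention) profile from a running Go process using the standard runtime and net/http/pprof packages. The port and setup here are assumptions for illustration, not anything from this PR.

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers, including /debug/pprof/block
	"runtime"
)

func main() {
	// Record blocking events; a rate of 1 samples every event (use a larger
	// value in production to reduce overhead).
	runtime.SetBlockProfileRate(1)

	// While the benchmark is running, fetch the profile with:
	//   go tool pprof http://localhost:6060/debug/pprof/block
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```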

Contributor Author

Added an additional bullet point to #1365 to track this.

robskillington (Collaborator) left a comment

LGTM

// NoFinalize marks the Results such that a subsequent call to Finalize() will
// be a no-op and will not return the object to the pool or release any of its
// resources.
NoFinalize()
Collaborator

This is only used in tests AFAICT - can you remove it?

Contributor Author

Synced with Rob offline; since it's already written and tested, I'm going to leave it in so we can use it for the query cache.
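For illustration, a hypothetical sketch of the Finalize/NoFinalize pattern the doc comment above describes: Finalize normally clears the object and returns it to its pool, while NoFinalize turns that into a no-op so results that are held elsewhere (e.g. in a cache) aren't returned to the pool out from under the holder. The names are made up and this is not the actual Results implementation.

```go
package results

import "sync"

// pooledResults is a stand-in for a pooled results object.
type pooledResults struct {
	pool       *sync.Pool
	noFinalize bool
	data       []byte
}

// NoFinalize marks the results so a subsequent Finalize is a no-op.
func (r *pooledResults) NoFinalize() {
	r.noFinalize = true
}

// Finalize resets the object and returns it to the pool, unless NoFinalize
// was called, in which case it releases nothing.
func (r *pooledResults) Finalize() {
	if r.noFinalize {
		return
	}
	r.data = r.data[:0]
	r.pool.Put(r)
}
```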

richardartoul merged commit 3e7b6a1 into master on Feb 14, 2019
richardartoul deleted the ra/query-cache branch on February 14, 2019 at 18:15