compaction: incorporate L0 size into compaction priority #1624
Conversation
Created #1625 too.
While I'm down to prioritize this merge to make things easy in the short term, it'd be a good idea to file a follow-up issue to benchmark the impact of this on, say, a heavy TPC-C import (which stresses L0 a lot).
Reviewed 1 of 1 files at r1, all commit messages.
Reviewable status: all files reviewed, 1 unresolved discussion (waiting on @jbowens)
compaction_picker.go, line 829 at r1 (raw file):
}

func (p *compactionPickerByScore) calculateL0Score(inProgressCompactions []compactionInfo) float64 {
nit: we can probably remove this argument?
Reviewable status: all files reviewed, 2 unresolved discussions (waiting on @jbowens)
compaction_picker.go, line 778 at r1 (raw file):
	scores[level].score = float64(levelSize) / float64(p.levelMaxBytes[level])
	scores[level].origScore = scores[level].score
}
So we are using the 64MB L0 goal, in which case fewer than 64 files of 1MB each would not cause the score to exceed 1, yes?
Reviewable status: all files reviewed, 2 unresolved discussions (waiting on @jbowens)
compaction_picker.go, line 778 at r1 (raw file):
Previously, sumeerbhola wrote…
So we are using the 64MB L0 goal, in which case fewer than 64 files of 1MB each would not cause the score to exceed 1, yes?
Never mind, the 64MB was an Lbase goal. It seems reasonable to reuse the Lbase goal as the L0 goal too.
That way ~128MB would give a score of 2.
Force-pushed from a9c253c to b2b0cf7.
Reviewed 2 of 2 files at r2, all commit messages.
Reviewable status: all files reviewed, 3 unresolved discussions (waiting on @jbowens)
testdata/compaction_picker_target_level, line 198 at r2 (raw file):
pick ongoing=(5,6)
----
L0->L4: 1.0
Do we want to add some dedicated test cases for when L0 is large? Can be a follow-up.
Reviewable status: all files reviewed, 4 unresolved discussions (waiting on @itsbilal, @jbowens, @nicktrav, and @sumeerbhola)
compaction_picker.go, line 778 at r1 (raw file):
Previously, sumeerbhola wrote…
Never mind, the 64MB was an Lbase goal. It seems reasonable to reuse the Lbase goal as the L0 goal too.
That way ~128MB would give a score of 2.
That makes sense to me.
compaction_picker.go, line 776 at r2 (raw file):
sizeAdjust := calculateSizeAdjust(inProgressCompactions)
for level := 0; level < numLevels; level++ {
	levelSize := int64(levelCompensatedSize(p.vers.Levels[level])) + sizeAdjust[level]
A thought I had last night: the use of compensated size in L0 will be new. On one hand, this seems desirable, prioritizing compactions out of L0 if it contains tombstones. Compacting these tombstones first may reduce w-amp when compacting lower levels. On the other hand, once we pursue a compaction out of L0, picking which files are compacted will not prioritize the files with the tombstones.
testdata/compaction_picker_target_level, line 198 at r2 (raw file):
Previously, nicktrav (Nick Travers) wrote…
Do we want to add some dedicated test cases for when L0 is large? Can be a follow-up.
Sounds good, will add in followup.
Reviewed 1 of 1 files at r3, all commit messages.
Reviewable status: all files reviewed, 4 unresolved discussions (waiting on @itsbilal, @jbowens, @nicktrav, and @sumeerbhola)
Currently, L0's compaction picking score is determined only by the number of sublevels. This doesn't account for scenarios where L0 consists of many non-overlapping files. Adjust the compaction heuristic to the maximum of scores computed from sublevels and size. See cockroachdb#1623.
New change to set
Closing in favor of #1632.