Avoid chunk allocations and refactor compactions #118
Conversation
😮 ^5 @fabxc
This looks pretty good!
@@ -112,7 +113,7 @@ type BlockStats struct {
 type BlockMetaCompaction struct {
 	// Maximum number of compaction cycles any source block has
 	// gone through.
-	Generation int `json:"generation"`
+	Level int `json:"level"`
How would this affect old data? The Level for older blocks would be 0, so I don't think any of them will be compacted, and we could load the wrong heads.
Which is okay since we moved to "block-ranges" anyway, but I think this should be noted when we release.
We are beta... it's just a breaking change .__.
 chk.Ref = seq | uint64(w.n)

-	n = binary.PutUvarint(b, uint64(len(chk.Chunk.Bytes())))
+	n := binary.PutUvarint(b[:], uint64(len(chk.Chunk.Bytes())))
Just curious, how does b[:] affect it?
We turned b into an array instead of a slice, causing it to be allocated on the stack. I thought the compiler's escape analysis should have done this before too, but apparently it didn't catch it, so I made it explicit.
Anyway, PutUvarint wants a slice, not an array, and [:] does exactly that.
@@ -265,7 +264,7 @@ func (w *chunkWriter) WriteChunks(chks ...*ChunkMeta) error {
 	if err := chk.writeHash(w.crc32); err != nil {
 		return err
 	}
-	if err := w.write(w.crc32.Sum(nil)); err != nil {
+	if err := w.write(w.crc32.Sum(b[:0])); err != nil {
Curious again, how does this affect it?
Same as above.
@gouthamve
depends on #116
Compaction spikes are gone now. The only thing still causing relevant allocations is the rebuilding of postings lists. That can cause spikes with high series churn, but it's TBD whether they are big enough to require optimizations.
I refactored the compactor along the way.