This repository has been archived by the owner on Aug 13, 2019. It is now read-only.

Avoid chunk allocations and refactor compactions #118

Merged

fabxc merged 3 commits into allocsave from allocsave2 on Aug 10, 2017

Conversation

@fabxc (Contributor) commented Aug 9, 2017

@gouthamve

@gouthamve

depends on #116
Compaction spikes are gone now. The only thing still causing relevant allocations is the rebuilding of postings lists. That can cause spikes under high series churn, but it's TBD whether they are big enough to require optimization.

I refactored the compactor along the way.

[Screenshot: 2017-08-09 08:55:37]

@grobie (Contributor) commented Aug 9, 2017

😮 ^5 @fabxc

@gouthamve (Collaborator) left a comment

This looks pretty good!

@@ -112,7 +113,7 @@ type BlockStats struct {
 type BlockMetaCompaction struct {
 	// Maximum number of compaction cycles any source block has
 	// gone through.
-	Generation int `json:"generation"`
+	Level int `json:"level"`
@gouthamve (Collaborator) commented:

How would this affect old data? The Level for the older blocks would be 0, and I don't think any of them will be compacted, so we could end up loading the wrong heads.

Which is okay as we moved to "block ranges" anyways, but I think this should be noted when we release.

@fabxc (Author) replied:

We are beta... it's just a breaking change .__.

 	chk.Ref = seq | uint64(w.n)

-	n = binary.PutUvarint(b, uint64(len(chk.Chunk.Bytes())))
+	n := binary.PutUvarint(b[:], uint64(len(chk.Chunk.Bytes())))
@gouthamve (Collaborator) commented:

Just curious, how does b[:] affect it?

@fabxc (Author) replied:

We turned b into an array instead of a slice, causing it to be allocated on the stack. I thought the compiler's escape analysis should have handled this before too, but apparently it didn't catch it, so I made it explicit.

Anyway, PutUvarint wants a slice, not an array, and [:] does exactly that.

@@ -265,7 +264,7 @@ func (w *chunkWriter) WriteChunks(chks ...*ChunkMeta) error {
 	if err := chk.writeHash(w.crc32); err != nil {
 		return err
 	}
-	if err := w.write(w.crc32.Sum(nil)); err != nil {
+	if err := w.write(w.crc32.Sum(b[:0])); err != nil {
@gouthamve (Collaborator) commented:

Curious again, how does this affect it?

@fabxc (Author) replied:

Same as above.

@fabxc fabxc merged commit f9f439c into allocsave Aug 10, 2017
@fabxc fabxc deleted the allocsave2 branch August 10, 2017 12:45
3 participants