Use larger docstore blocks in quickwit #1135

Closed
4 of 5 tasks
fulmicoton opened this issue Feb 14, 2022 · 8 comments
Labels: enhancement (New feature or request)

fulmicoton (Contributor) commented Feb 14, 2022

Tantivy's docstore block size was picked using the Lucene value.

Increasing this value should improve the compression rate, especially on logs.
Loki, for instance, compresses 2x better (with snappy and larger blocks).

It should also reduce the size of the hotcache a little.
On the other hand, it will increase the amount of data read and decompressed to fetch a single doc.

In quickwit, every fetch comes with a large latency, which justifies larger blocks.

  • Study the size of the docstore on the hdfs dataset for different block sizes.
  • Make the block size configurable in tantivy (see the sketch after this list).
  • Change the block size used in quickwit.
  • (optional) Consider other compression algorithms.
  • (optional) Be column oriented at the scale of a block? (Would that help?)
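
As a rough illustration of the second item, a configurable block size could look like the following on the tantivy side. This is only a sketch: the `docstore_blocksize` field on `IndexSettings` is an assumption here and its name and behavior may differ between tantivy versions.

```rust
use tantivy::schema::{Schema, STORED, TEXT};
use tantivy::{doc, Index, IndexSettings};

fn main() -> tantivy::Result<()> {
    let mut schema_builder = Schema::builder();
    let body = schema_builder.add_text_field("body", TEXT | STORED);
    let schema = schema_builder.build();

    // Assumed field: sweep this value (16 KB, 160 KB, 1.6 MB, ...) and compare
    // the size of the resulting store files to study the compression rate.
    let settings = IndexSettings {
        docstore_blocksize: 160_000,
        ..IndexSettings::default()
    };

    let index = Index::builder()
        .schema(schema)
        .settings(settings)
        .create_in_ram()?;

    let mut writer = index.writer(50_000_000)?;
    writer.add_document(doc!(body => "081109 203615 148 INFO dfs.DataNode$PacketResponder: ..."))?;
    writer.commit()?;
    Ok(())
}
```
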
fulmicoton added the enhancement label on Feb 14, 2022
PSeitz (Contributor) commented Mar 1, 2022

Block sizes and total doc store size:

| block size | total docstore size |
|------------|---------------------|
| 16 KB      | 140M                |
| 160 KB     | 136M                |
| 1.6 MB     | 135M                |

fulmicoton (Contributor Author) commented:

bummer. :p

What was the dataset?
Can you hack together some quick code to test what happens if we are column-oriented at the scale of a block?

PSeitz (Contributor) commented Mar 1, 2022

The dataset was hdfs.

I can put together a column-oriented test. I think column-oriented storage would be a good fit in combination with entropy encoding.
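
For reference, here is a minimal sketch of what "column oriented at the scale of a block" could mean: within one block, the values of each field are laid out contiguously before the whole block is handed to the compressor, so similar data ends up next to each other. This is purely illustrative and not tantivy's actual store format.

```rust
// Illustrative only: a "block" of docs, each doc being one value per field.
// Row-oriented layout writes doc0.field0, doc0.field1, ..., doc1.field0, ...
// Column-oriented layout writes all field0 values, then all field1 values, etc.
fn columnarize_block(docs: &[Vec<&str>], num_fields: usize) -> Vec<u8> {
    let mut block = Vec::new();
    for field_idx in 0..num_fields {
        for doc in docs {
            block.extend_from_slice(doc[field_idx].as_bytes());
            block.push(b'\n');
        }
    }
    // `block` would then be compressed as a whole by the docstore compressor
    // (lz4 / snappy / zstd), exactly like a row-oriented block is today.
    block
}

fn main() {
    let docs = vec![
        vec!["081109 203518", "INFO", "dfs.DataNode$PacketResponder: Received block"],
        vec!["081109 203519", "INFO", "dfs.DataNode$PacketResponder: PacketResponder 2"],
    ];
    let block = columnarize_block(&docs, 3);
    println!("column-oriented block payload: {} bytes", block.len());
}
```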

PSeitz (Contributor) commented Mar 1, 2022

The comparison is based on a fairly hacky version (ignoring 1:n fields and segment merging).

Dataset: 1 GB hdfs

| layout       | block size | total docstore size |
|--------------|------------|---------------------|
| row-store    | 160 KB     | 9.9M                |
| column-store | 160 KB     | 8.2M                |

fulmicoton (Contributor Author) commented Mar 3, 2022

that's a bummer. >.<

My hunch was wrong then.
We need to go deeper. The original problem is that, in my tests, we did not compress as well as Loki or ClickHouse. Can you broadly investigate what could improve compression of the docstore, and possibly check whether my original observation (Loki compresses better) was correct or not?

(Aggregation in quickwit is higher priority.)

fulmicoton (Contributor Author) commented:

quickwit-oss/tantivy#1374 suggests block size has a big influence.

guilload (Member) commented:

We resumed this work in the last few weeks and did observe that increasing the block size improves the compression rate. After analysis, we settled on the following default compression parameters:

  • compression algo: zstd
  • compression level: 8
  • input blocksize: 1M

This was implemented in #1646.
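
Expressed as tantivy index settings, those defaults would look roughly like the sketch below. The `ZstdCompressor` struct and its `compression_level` field are assumptions based on recent tantivy releases and may differ in other versions.

```rust
use tantivy::store::{Compressor, ZstdCompressor};
use tantivy::IndexSettings;

// Sketch of the defaults described above: zstd at level 8 with ~1 MB input blocks.
// Field and variant names are assumptions and may differ between tantivy versions.
fn default_docstore_settings() -> IndexSettings {
    IndexSettings {
        docstore_compression: Compressor::Zstd(ZstdCompressor {
            compression_level: Some(8),
        }),
        docstore_blocksize: 1_000_000,
        ..IndexSettings::default()
    }
}
```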

guilload (Member) commented:

Closed by #1646.
