Use larger docstore blocks in quickwit #1135
[chart: total doc store size at block sizes of 16KB, 160KB, and 1.6MB]
bummer. :p What was the dataset?
The dataset was hdfs. I can make a column-oriented test. I think column-oriented storage would be a good fit in combination with entropy encoding.
The comparison is on a quite hacky version (ignoring 1:n fields and segment merges). Dataset: 1GB hdfs.
[results: 160KB block-size row-store vs. 160KB block-size column-store]
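The row-store vs. column-store comparison above could be sketched as follows. This is an illustrative toy, not the actual benchmark: it serializes the same synthetic hdfs-like records row-wise and column-wise and compresses both with zlib (a stand-in for the compressors tantivy actually uses). Column layout groups similar values together, which often compresses better, though as the thread found, whether it actually wins depends on the data and compressor.

```python
# Illustrative sketch (not the actual quickwit/tantivy benchmark):
# compare zlib-compressed sizes of the same records stored row-wise
# vs. column-wise. All data below is synthetic.
import zlib

# Synthetic hdfs-like records: (timestamp, severity, component, block_id)
records = [
    (1081251800 + i, "INFO" if i % 10 else "WARN",
     "dfs.DataNode", f"blk_{1000000 + i}")
    for i in range(5_000)
]

# Row store: each record serialized as one line, documents interleaved.
row_bytes = "\n".join(
    f"{ts}\t{sev}\t{comp}\t{blk}" for ts, sev, comp, blk in records
).encode()

# Column store: each field's values serialized contiguously.
cols = list(zip(*records))
col_bytes = b"\n".join(
    "\n".join(map(str, col)).encode() for col in cols
)

row_size = len(zlib.compress(row_bytes))
col_size = len(zlib.compress(col_bytes))
print(f"row-store:    {row_size} bytes compressed")
print(f"column-store: {col_size} bytes compressed")
```

Per-column entropy coding (as suggested above) would go further than this sketch, since zlib applied to a whole column cannot exploit value-level structure like sorted timestamps or low-cardinality severity levels.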
that's a bummer. >.< My hunch was wrong then. (Aggregation in quickwit is higher priority.)
quickwit-oss/tantivy#1374 suggests block size has a big influence.
Closed by #1646
Tantivy's docstore block size was picked using the Lucene value.
Increasing this value should improve the compression rate, especially on logs.
Loki, for instance, compresses 2x better (with snappy and larger blocks).
It should also reduce the size of the hotcache a little.
On the other hand, it will increase the amount of data read and decompressed to fetch one doc.
In quickwit, every fetch comes with a large latency, which justifies larger blocks.
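The tradeoff described above can be sketched with a small experiment. This is an assumption-laden toy, not quickwit's actual docstore code: it packs synthetic log lines into independently compressed blocks of roughly 16KB and 160KB (zlib standing in for the real compressor) and compares the total compressed size. Larger blocks let the compressor exploit more cross-document redundancy and amortize per-block overhead, at the cost of decompressing more bytes to fetch a single doc.

```python
# Illustrative sketch (not quickwit's implementation): total compressed
# size of a docstore-like layout at two block sizes. Data is synthetic.
import zlib

def make_log_lines(n):
    # Repetitive, log-like records built from a few message templates.
    templates = [
        "INFO dfs.DataNode: Receiving block blk_{} src: /10.0.0.{}",
        "WARN dfs.FSNamesystem: Replica placement failed for blk_{} on /10.0.0.{}",
    ]
    return [templates[i % 2].format(1000000 + i, i % 256) for i in range(n)]

def compressed_size(lines, block_size):
    # Pack docs into blocks of ~block_size uncompressed bytes, compress
    # each block independently, and sum the compressed block sizes.
    total, block = 0, b""
    for line in lines:
        block += line.encode() + b"\n"
        if len(block) >= block_size:
            total += len(zlib.compress(block))
            block = b""
    if block:
        total += len(zlib.compress(block))
    return total

docs = make_log_lines(20_000)
small = compressed_size(docs, 16 * 1024)    # ~16KB blocks
large = compressed_size(docs, 160 * 1024)   # ~160KB blocks
print(f"16KB blocks:  {small} bytes")
print(f"160KB blocks: {large} bytes ({small / large:.2f}x smaller)")
```

The flip side is not shown here: fetching one doc requires decompressing its whole block, so a 10x larger block means roughly 10x more data read and decompressed per fetch, which only pays off when, as in quickwit, per-fetch latency dominates.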