release-22.2: Changefeed backfill performance fixes #88915 (Closed)
miretskiy wants to merge 11 commits into cockroachdb:release-22.2 from miretskiy:backport22.2-87968-88064-88370-87994-88395-88635-88672-88814
Conversation
Thanks for opening a backport. Please check the backport criteria before merging: if some of the basic criteria cannot be satisfied, ensure that the exceptional criteria are satisfied. Add a brief release justification to the body of your PR to justify this backport.
miretskiy force-pushed the backport22.2-87968-88064-88370-87994-88395-88635-88672-88814 branch from 1bde344 to 22604ab on September 28, 2022 16:22
Add a microbenchmark for the `tree.AsJSON` method. Release note: None. Release justification: test-only change
Improve performance of the `tree.AsJSON` method. These improvements are important for any query that produces a large number of JSON objects, as well as for changefeeds, which rely on this function when producing a JSON-encoded feed. Most of the changes modify the underlying types (such as date/timestamp types, box2d, etc.) to favor functions that append to a bytes buffer instead of relying on slower functions such as `fmt.Sprintf`. Conversion performance improved around 5-10% for most types, and as much as 50% for time types:

```
Benchmark              old t/op     new t/op     delta
AsJSON/box2d-10        578ns ± 3%   414ns ± 2%   -28.49%  (p=0.000 n=10+9)
AsJSON/box2d[]-10      1.64µs ± 3%  1.19µs ± 4%  -27.14%  (p=0.000 n=10+10)
AsJSON/time-10         232ns ± 2%   103ns ± 1%   -55.61%  (p=0.000 n=10+10)
AsJSON/time[]-10       687ns ± 4%   342ns ± 4%   -50.17%  (p=0.000 n=10+10)
```

Note: some types in the local benchmark show a slight slowdown. No changes were made to those types, and in general the encoding of these types may be too fast to reliably detect changes:

```
Benchmark              old t/op     new t/op     delta
AsJSON/bool[]-10       65.9ns ± 1%  67.7ns ± 2%  +2.79%   (p=0.001 n=8+9)
```

Emphasis was also placed on reducing allocations. By relying more heavily on a pooled FmtCtx, which contains a bytes buffer, some conversions (e.g. time) saw their allocations amortized away entirely:

```
Benchmark              old B/op     new B/op     delta
AsJSON/timestamp-10    42.1B ± 3%   0.0B         -100.00% (p=0.000 n=10+10)
AsJSON/timestamp[]-10  174B ± 4%    60B ± 1%     -65.75%  (p=0.000 n=10+10)
```

Release note: None. Release justification: performance improvement
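As a rough illustration of the buffer-append pattern this commit describes (not the actual CockroachDB code; `sprintfTime` and `appendTime` are hypothetical names):

```go
package main

import (
	"bytes"
	"fmt"
	"time"
)

// sprintfTime allocates an intermediate string and a new result string on
// every call.
func sprintfTime(t time.Time) string {
	return fmt.Sprintf("%q", t.Format(time.RFC3339Nano))
}

// appendTime writes the quoted timestamp into a caller-owned buffer using a
// reusable scratch slice, so a pooled formatting context can amortize the
// allocations across many rows.
func appendTime(buf *bytes.Buffer, scratch []byte, t time.Time) []byte {
	scratch = t.AppendFormat(scratch[:0], time.RFC3339Nano)
	buf.WriteByte('"')
	buf.Write(scratch)
	buf.WriteByte('"')
	return scratch
}

func main() {
	var buf bytes.Buffer
	scratch := make([]byte, 0, 64)
	now := time.Now()
	fmt.Println(sprintfTime(now))
	scratch = appendTime(&buf, scratch, now)
	fmt.Println(buf.String(), len(scratch))
}
```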
Add a JSON encoder benchmark. Release note: None. Release justification: test-only change
Rewrite the JSON encoder to improve its performance. Prior to this change the JSON encoder was very inefficient, for several underlying reasons:

* New Go map objects were constructed for each event.
* The underlying JSON conversion functions had inefficiencies (tracked in cockroachdb#87968).
* Converting a Go map to JSON incurs the cost of sorting the keys -- for each row. Sorting, particularly when rows are wide, has significant cost.
* Each conversion to JSON allocated a new array builder (to encode keys) and a new object builder; that too has a cost.
* The underlying code structure, while attempting to reuse code when constructing the different "envelope" formats, caused the code to be less efficient.

This PR addresses all of the above. In particular, since a schema version for a table is guaranteed to have the same set of primary key and value columns, we can construct the JSON builders once. The expensive sort operation can be performed once per version; builders can be memoized and cached. The performance impact is significant:

* Key encoding is 5-30% faster, depending on the number of primary keys.
* Value encoding is 30-60% faster (the slowest case being the "wrapped" envelope with diff, which effectively encodes 2x the values).
* Byte allocations per row are reduced by over 70%, with the number of allocations reduced similarly.

Release note (enterprise change): Changefeed JSON encoder performance improved by 50%. Release justification: performance improvement
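A minimal sketch of the memoization idea, assuming a hypothetical `encoder` type and an integer schema-version key; the real changefeed encoder is considerably more involved:

```go
package main

import (
	"fmt"
	"sort"
)

// columnLayout holds column names in their final, sorted output order.
type columnLayout struct {
	ordered []string
}

// encoder caches one layout per schema version, so the expensive sort
// happens once per version rather than once per row.
type encoder struct {
	layouts map[int]*columnLayout
}

func (e *encoder) layoutFor(version int, cols []string) *columnLayout {
	if l, ok := e.layouts[version]; ok {
		return l
	}
	ordered := append([]string(nil), cols...)
	sort.Strings(ordered) // sort once per schema version
	l := &columnLayout{ordered: ordered}
	e.layouts[version] = l
	return l
}

// encodeRow walks the cached column order, avoiding per-row map
// construction and per-row key sorting.
func (e *encoder) encodeRow(version int, cols []string, vals map[string]string) []string {
	l := e.layoutFor(version, cols)
	out := make([]string, 0, len(l.ordered))
	for _, c := range l.ordered {
		out = append(out, c+"="+vals[c])
	}
	return out
}

func main() {
	e := &encoder{layouts: map[int]*columnLayout{}}
	row := e.encodeRow(1, []string{"b", "a"}, map[string]string{"a": "1", "b": "2"})
	fmt.Println(row)
}
```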
Add a metric to track when the cloud storage sink has to flush data because the file size limit was reached. Fixes: cockroachdb#84435. Release note: None
Make `span.Frontier` thread safe by default. Release note: None
Previously, KV events were consumed and processed by the changefeed aggregator using a single, synchronous goroutine. This PR makes it possible to run up to `changefeed.event_consumer_workers` consumers to process events concurrently. The cluster setting `changefeed.event_consumer_worker_queue_size` is added to help control the number of concurrent events that can be in flight. Specifying `changefeed.event_consumer_workers=0` keeps the existing single-threaded implementation. Release note (enterprise change): This change adds the cluster setting `changefeed.event_consumer_workers`, which allows changefeeds to process events concurrently.
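A minimal sketch of the concurrent-consumer pattern these settings enable, assuming a plain channel-backed queue and hypothetical `event`/`consume` names; the actual implementation differs:

```go
package main

import (
	"fmt"
	"sync"
)

type event struct{ key, value string }

// consume processes events with up to `workers` goroutines reading from a
// bounded queue; workers == 0 falls back to the synchronous path.
func consume(events []event, workers, queueSize int, process func(event)) {
	if workers == 0 {
		// Single-threaded path, equivalent to the previous behavior.
		for _, ev := range events {
			process(ev)
		}
		return
	}
	queue := make(chan event, queueSize) // bounds the events in flight
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for ev := range queue {
				process(ev)
			}
		}()
	}
	for _, ev := range events {
		queue <- ev
	}
	close(queue)
	wg.Wait()
}

func main() {
	evs := []event{{"k1", "v1"}, {"k2", "v2"}}
	consume(evs, 2, 16, func(ev event) { fmt.Println(ev.key, ev.value) })
}
```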
Prior to this change, the cloud storage sink triggered a file-size-based flush whenever a new row would push the file size beyond the configured threshold. This significantly reduced throughput whenever such an event occurred: no additional events could be added to the cloud storage sink while the previous flush was active. This is not necessary. The cloud storage sink can trigger file-based flushes asynchronously. The only requirement is that if a real, non-file-based flush arrives, or if we need to emit resolved timestamps, we must wait for all active flush requests to complete. In addition, because every event added to the cloud sink has an associated allocation, which is released when the file is written out, performing flushes asynchronously is safe with respect to memory usage and accounting. Release note (enterprise change): Changefeeds using the cloud storage sink now have better throughput. Release justification: performance fix
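The asynchronous-flush idea can be sketched as follows, with hypothetical `sink`/`writeFile` types standing in for the real cloud storage sink:

```go
package main

import (
	"fmt"
	"sync"
)

type sink struct {
	mu       sync.Mutex
	buf      []byte
	sizeCap  int
	inFlight sync.WaitGroup
}

// emit buffers a row; when the buffer exceeds the size threshold, the file
// is written in the background so new rows are not blocked.
func (s *sink) emit(row []byte) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.buf = append(s.buf, row...)
	if len(s.buf) >= s.sizeCap {
		file := s.buf
		s.buf = nil
		s.inFlight.Add(1)
		go func() {
			defer s.inFlight.Done()
			writeFile(file)
		}()
	}
}

// flush is the synchronous path (e.g. before emitting a resolved
// timestamp); it must wait for every outstanding async flush to finish.
func (s *sink) flush() {
	s.mu.Lock()
	file := s.buf
	s.buf = nil
	s.mu.Unlock()
	if len(file) > 0 {
		writeFile(file)
	}
	s.inFlight.Wait()
}

func writeFile(b []byte) { fmt.Printf("wrote %d bytes\n", len(b)) }

func main() {
	s := &sink{sizeCap: 8}
	s.emit([]byte("0123456789"))
	s.flush()
}
```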
Expand the set of supported compression algorithms in changefeeds. A faster implementation of the gzip algorithm is available and is used by default. The gzip implementation can be reverted to the Go standard library implementation via the `changefeed.fast_gzip.enabled` setting. In addition, add support for compressing files with zstd. Release note (enterprise change): Changefeeds can emit files compressed with the zstd algorithm, which provides good compression and is much faster than gzip. In addition, a new, faster implementation of gzip is used by default.
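For illustration, codec selection along these lines might look like the sketch below; the `newCompressor` helper and the use of the klauspost/compress packages are assumptions for the sake of the example, not the changefeed sink code:

```go
package main

import (
	"compress/gzip" // Go standard library gzip
	"io"
	"os"

	kgzip "github.com/klauspost/compress/gzip" // faster gzip implementation
	"github.com/klauspost/compress/zstd"
)

// newCompressor returns a WriteCloser for the requested algorithm. The
// fastGzip flag mirrors the idea of a fast-gzip toggle.
func newCompressor(algo string, fastGzip bool, w io.Writer) (io.WriteCloser, error) {
	switch algo {
	case "zstd":
		zw, err := zstd.NewWriter(w)
		return zw, err
	default: // gzip
		if fastGzip {
			return kgzip.NewWriter(w), nil
		}
		return gzip.NewWriter(w), nil
	}
}

func main() {
	wc, err := newCompressor("zstd", true, os.Stdout)
	if err != nil {
		panic(err)
	}
	if _, err := wc.Write([]byte("hello")); err != nil {
		panic(err)
	}
	if err := wc.Close(); err != nil {
		panic(err)
	}
}
```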
By default, changefeeds distribute work to nodes based on which nodes are the leaseholders for the ranges. This makes sense, since running a rangefeed against the local node is more efficient. In a cluster where ranges are almost uniformly assigned to each node, running a changefeed export is efficient: all nodes are busy until they are done. The KV server is responsible for making sure that ranges are more or less uniformly distributed across the cluster; however, this determination is based on the set of all ranges in the cluster, not on any particular table. As a result, a table may not have a uniform distribution of its ranges across all the nodes. When this happens, a changefeed export can take a long time due to the long tail: as each node completes its set of assigned ranges, it idles until the changefeed completes. This PR introduces a change (controlled via the `changefeed.balance_range_distribution.enable` setting) where the changefeed tries to produce a more balanced assignment, with each node responsible for roughly 1/Nth of the work in a cluster of N nodes. Release note (enterprise change): Changefeed exports are up to 25% faster due to uniform work assignment.
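A toy sketch of the balancing idea, using a hypothetical `balance` helper over a span-to-node assignment; the real logic operates on range spans and leaseholder information:

```go
package main

import "fmt"

type span struct{ start, end string }

// balance redistributes spans so that every node ends up with at most
// ceil(total/nodes) spans, starting from an existing leaseholder-based
// assignment.
func balance(assignment map[int][]span, nodes int) map[int][]span {
	total := 0
	for _, sp := range assignment {
		total += len(sp)
	}
	target := (total + nodes - 1) / nodes // ceil(total / nodes)

	var overflow []span
	for n, sp := range assignment {
		if len(sp) > target {
			overflow = append(overflow, sp[target:]...)
			assignment[n] = sp[:target]
		}
	}
	for n := 0; n < nodes && len(overflow) > 0; n++ {
		for len(assignment[n]) < target && len(overflow) > 0 {
			assignment[n] = append(assignment[n], overflow[0])
			overflow = overflow[1:]
		}
	}
	return assignment
}

func main() {
	a := map[int][]span{0: make([]span, 7), 1: make([]span, 1), 2: nil}
	b := balance(a, 3)
	fmt.Println(len(b[0]), len(b[1]), len(b[2])) // roughly 1/3 of the work each
}
```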
Fix a latent array avro encoding bug where a previously allocated memo array might contain a nil element, while the code assumed that the element must always be a map. Release note: None.
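A minimal illustration (not the actual avro encoder) of the class of bug described: reusing a memoized slice whose elements may be nil, and guarding against that before treating an element as a map. The `reuseEntry` helper is hypothetical.

```go
package main

import "fmt"

// reuseEntry returns the memoized map at index i, allocating a fresh map
// when the slot is nil instead of assuming it always holds a map.
func reuseEntry(memo []map[string]interface{}, i int) map[string]interface{} {
	if i < len(memo) && memo[i] != nil {
		return memo[i] // safe to reuse the previously allocated map
	}
	m := map[string]interface{}{}
	if i < len(memo) {
		memo[i] = m
	}
	return m
}

func main() {
	memo := make([]map[string]interface{}, 2) // elements start out nil
	e := reuseEntry(memo, 0)
	e["k"] = "v"
	fmt.Println(memo[0])
}
```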
miretskiy force-pushed the backport22.2-87968-88064-88370-87994-88395-88635-88672-88814 branch from 22604ab to 78646ec on September 28, 2022 20:10
Backport:
Please see individual PRs for details.
/cc @cockroachdb/release
Release Justification: Important changes related to changefeed backfill performance.