
release-22.2: Changefeed backfill performance fixes #88915

Conversation

miretskiy
Contributor

@miretskiy miretskiy commented Sep 28, 2022

Backport:

Please see individual PRs for details.

/cc @cockroachdb/release

Release Justification: Important changes related to changefeed backfill performance.

@miretskiy miretskiy added the do-not-merge label (bors won't merge a PR with this label) Sep 28, 2022
@miretskiy miretskiy requested a review from a team September 28, 2022 14:40
@miretskiy miretskiy requested review from a team as code owners September 28, 2022 14:40
@miretskiy miretskiy requested review from ajwerner and removed request for a team September 28, 2022 14:40
@blathers-crl

blathers-crl bot commented Sep 28, 2022

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.
If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria below are satisfied.
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters.
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user who doesn’t know or care about this backport has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?

@cockroach-teamcity
Member

This change is Reviewable

@miretskiy miretskiy force-pushed the backport22.2-87968-88064-88370-87994-88395-88635-88672-88814 branch from 1bde344 to 22604ab on September 28, 2022 16:22
@miretskiy miretskiy requested a review from a team as a code owner September 28, 2022 16:22
Yevgeniy Miretskiy and others added 11 commits September 28, 2022 16:09
Add a micro benchmark for the `tree.AsJSON` method.
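
A micro benchmark of this shape follows the standard Go `testing.B` pattern; the sketch below is illustrative only (the `formatTime` helper stands in for the real `tree.AsJSON` conversion under test):

```
package tree_test

import (
	"testing"
	"time"
)

// formatTime stands in for the conversion under test; the real benchmark
// exercises tree.AsJSON across many datum types.
func formatTime(t time.Time) string {
	return t.Format(time.RFC3339Nano)
}

func BenchmarkTimeToJSONText(b *testing.B) {
	ts := time.Date(2022, 9, 28, 14, 40, 0, 0, time.UTC)
	b.ReportAllocs()
	for i := 0; i < b.N; i++ {
		_ = formatTime(ts)
	}
}
```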

Release note: None
Release justification: test only change
Improve the performance of the `tree.AsJSON` method.

These improvements matter for any query that produces a
large number of JSON objects, as well as for changefeeds,
which rely on this function when producing JSON-encoded feeds.

Most of the changes revolve around modifying the underlying types
(such as date/timestamp types, box2d, etc.) to favor functions
that append to a bytes buffer, instead of relying on slower
functions such as `fmt.Sprintf`.  Conversion performance improved
by around 5-10% for most types, and by as much as 50% for
time types:

```
Benchmark            old t/op      new t/op    delta
AsJSON/box2d-10    578ns ± 3%    414ns ± 2%   -28.49%  (p=0.000 n=10+9)
AsJSON/box2d[]-10  1.64µs ± 3%   1.19µs ± 4%  -27.14%  (p=0.000 n=10+10)
AsJSON/time-10     232ns ± 2%    103ns ± 1%   -55.61%  (p=0.000 n=10+10)
AsJSON/time[]-10   687ns ± 4%    342ns ± 4%   -50.17%  (p=0.000 n=10+10)
```
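
The pattern behind these wins is appending into a caller-supplied byte slice rather than formatting a fresh string for every value; a minimal sketch of the idea (not the actual `tree` code), using `time.Time`:

```
package main

import (
	"fmt"
	"time"
)

// Slow path: fmt.Sprintf allocates a new string on every call.
func timeToJSONSlow(t time.Time) string {
	return fmt.Sprintf("%q", t.Format(time.RFC3339Nano))
}

// Fast path: AppendFormat writes directly into a caller-supplied buffer,
// so repeated conversions can reuse the same allocation.
func timeToJSONFast(buf []byte, t time.Time) []byte {
	buf = append(buf, '"')
	buf = t.AppendFormat(buf, time.RFC3339Nano)
	return append(buf, '"')
}

func main() {
	buf := make([]byte, 0, 64)
	now := time.Now()
	fmt.Println(timeToJSONSlow(now))
	fmt.Println(string(timeToJSONFast(buf[:0], now)))
}
```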

Note: Some types show a slight slowdown in the local benchmark.
No changes were made to those types, and in general the encoding
of these types may be too fast to reliably detect changes:
```
Benchmark            old t/op      new t/op       delta
AsJSON/bool[]-10    65.9ns ± 1%   67.7ns ± 2%    +2.79%  (p=0.001 n=8+9)
```

Emphasis was also placed on reducing allocations.
By relying more heavily on a pooled FmtCtx, which contains
a bytes buffer, some conversions (e.g. time) achieved amortized
elimination of allocations:
```
Benchmark               old B/op      new B/op    delta
AsJSON/timestamp-10    42.1B ± 3%      0.0B      -100.00%  (p=0.000 n=10+10)
AsJSON/timestamp[]-10  174B ± 4%      60B ± 1%   -65.75%  (p=0.000 n=10+10)
```
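
A sketch of the pooling pattern with `sync.Pool`; the `FmtCtx` type here is a simplified stand-in for the real pooled formatting context:

```
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// FmtCtx is a simplified stand-in for a pooled formatting context that
// carries a reusable bytes buffer.
type FmtCtx struct {
	Buf bytes.Buffer
}

var fmtCtxPool = sync.Pool{
	New: func() any { return &FmtCtx{} },
}

func getFmtCtx() *FmtCtx { return fmtCtxPool.Get().(*FmtCtx) }

// Close resets the buffer (keeping its underlying allocation) and returns
// the context to the pool for the next caller.
func (c *FmtCtx) Close() {
	c.Buf.Reset()
	fmtCtxPool.Put(c)
}

func main() {
	ctx := getFmtCtx()
	ctx.Buf.WriteString("2022-09-28T14:40:00Z")
	fmt.Println(ctx.Buf.String())
	ctx.Close() // return the buffer to the pool instead of freeing it
}
```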

Release Note: None
Release Justification: performance improvement
Add a JSON encoder benchmark.

Release note: None
Release justification: test only change
Rewrite the JSON encoder to improve its performance.

Prior to this change, the JSON encoder was very inefficient.
This inefficiency had multiple underlying causes:
  * New Go map objects were constructed for each event.
  * The underlying JSON conversion functions had inefficiencies
    (tracked in cockroachdb#87968).
  * Converting Go maps to JSON incurs the cost
    of sorting the keys -- for each row.  Sorting,
    particularly when rows are wide, has a significant cost.
  * Each conversion to JSON allocated a new array builder
    (to encode keys) and a new object builder; that too has a cost.
  * The underlying code structure, while attempting to reuse
    code when constructing different "envelope" formats,
    made the code less efficient.

This PR addresses all of the above.  In particular, since
a schema version for the table is guaranteed to have
the same set of primary key and value columns, we can construct
the JSON builders once.  The expensive sort operation can be
performed once per version; the builders can be memoized and cached.
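
A sketch of the memoization idea, with illustrative types and names rather than the changefeed encoder's actual ones: sort the column names once per schema version and cache the result, instead of re-sorting per row:

```
package main

import (
	"fmt"
	"sort"
	"sync"
)

// versionedBuilder holds the per-schema-version state that is expensive
// to compute: the stable, sorted column order used to emit JSON keys.
type versionedBuilder struct {
	sortedCols []string
}

var (
	mu       sync.Mutex
	builders = map[int]*versionedBuilder{} // keyed by schema version
)

// builderFor returns the cached builder for a schema version,
// constructing (and sorting) it only on first use.
func builderFor(version int, cols []string) *versionedBuilder {
	mu.Lock()
	defer mu.Unlock()
	if b, ok := builders[version]; ok {
		return b
	}
	sorted := append([]string(nil), cols...)
	sort.Strings(sorted) // the expensive sort happens once per version
	b := &versionedBuilder{sortedCols: sorted}
	builders[version] = b
	return b
}

func main() {
	b := builderFor(1, []string{"id", "name", "balance"})
	fmt.Println(b.sortedCols) // [balance id name], reused for every row
}
```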

The performance impact is significant:
  * Key encoding is 5-30% faster, depending on the number of primary
    key columns.
  * Value encoding is 30-60% faster (the slowest case being the
    "wrapped" envelope with diff, which effectively encodes 2x
    the values).
  * Bytes allocated per row drop by over 70%, with the number
    of allocations reduced similarly.

Release note (enterprise change): Changefeed JSON encoder
performance improved by 50%.
Release justification: performance improvement
Add a metric to track when the cloud storage sink has to flush
data because the file size limit was reached.

Fixes: cockroachdb#84435
Release note: None
Make `span.Frontier` thread-safe by default.
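
A minimal sketch of the usual mutex-wrapping pattern (not the actual `span.Frontier` implementation), reduced here to a single resolved timestamp:

```
package main

import (
	"fmt"
	"sync"
)

// frontier tracks a resolved timestamp; this sketch guards it with a
// mutex so it is safe to update from multiple goroutines.
type frontier struct {
	mu       sync.Mutex
	resolved int64
}

// Forward advances the frontier; safe for concurrent use.
func (f *frontier) Forward(ts int64) {
	f.mu.Lock()
	defer f.mu.Unlock()
	if ts > f.resolved {
		f.resolved = ts
	}
}

func (f *frontier) Resolved() int64 {
	f.mu.Lock()
	defer f.mu.Unlock()
	return f.resolved
}

func main() {
	var f frontier
	var wg sync.WaitGroup
	for i := int64(1); i <= 10; i++ {
		wg.Add(1)
		go func(ts int64) { defer wg.Done(); f.Forward(ts) }(i)
	}
	wg.Wait()
	fmt.Println(f.Resolved()) // 10
}
```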

Release note: None
Previously, KV events were consumed and processed by the changefeed
aggregator on a single, synchronous goroutine. This PR makes
it possible to run up to `changefeed.event_consumer_workers`
consumers to process events concurrently. The new cluster setting
`changefeed.event_consumer_worker_queue_size` helps
control the number of concurrent events that can be in flight.

Specifying `changefeed.event_consumer_workers=0` keeps the existing
single-threaded implementation.
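
A sketch of the fan-out shape described above, where the worker count and queue size play the roles of the two cluster settings (the event type and processing function are stand-ins):

```
package main

import (
	"fmt"
	"sync"
)

type event struct{ key, value string }

// consumeConcurrently fans events out to `workers` goroutines through a
// bounded channel; queueSize caps how many events can be in flight, much
// like changefeed.event_consumer_worker_queue_size.
func consumeConcurrently(events []event, workers, queueSize int, process func(event)) {
	queue := make(chan event, queueSize)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for ev := range queue {
				process(ev)
			}
		}()
	}
	for _, ev := range events {
		queue <- ev
	}
	close(queue) // lets workers drain the queue and exit
	wg.Wait()
}

func main() {
	evs := []event{{"a", "1"}, {"b", "2"}, {"c", "3"}}
	consumeConcurrently(evs, 2, 8, func(ev event) {
		fmt.Println("processed", ev.key)
	})
}
```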

Release note (enterprise change): This change adds the cluster setting
`changefeed.event_consumer_workers` which allows changefeeds to
process events concurrently.
Prior to this change, the cloud storage sink triggered a
file-size-based flush whenever a new row would
push the file size beyond the configured threshold.

This significantly reduced throughput
whenever such an event occurred: no additional events could
be added to the cloud storage sink while the previous flush was
active.

This is not necessary.  The cloud storage sink can trigger
file-based flushes asynchronously.  The only requirement
is that if a real, non-file-based flush arrives, or if we
need to emit resolved timestamps, we must wait for
all of the active flush requests to complete.

In addition, because every event added to the cloud sink has an
associated allocation, which is released when the file is written
out, performing flushes asynchronously is safe with respect
to memory usage and accounting.
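
A simplified sketch of the idea (not the actual sink code): size-triggered file flushes run on background goroutines, and only a real flush waits for all of them:

```
package main

import (
	"fmt"
	"sync"
	"time"
)

// sink sketches a cloud-storage-style sink where size-triggered file
// flushes run in the background and only a real flush waits for them.
type sink struct {
	inflight sync.WaitGroup
}

// flushFileAsync starts writing a full file without blocking new events.
func (s *sink) flushFileAsync(name string) {
	s.inflight.Add(1)
	go func() {
		defer s.inflight.Done()
		time.Sleep(10 * time.Millisecond) // stand-in for the upload
		fmt.Println("wrote", name)
	}()
}

// Flush is the real, caller-visible flush: it must wait for every
// outstanding asynchronous file write before returning.
func (s *sink) Flush() {
	s.inflight.Wait()
	fmt.Println("all files durable")
}

func main() {
	var s sink
	s.flushFileAsync("part-0001.ndjson")
	s.flushFileAsync("part-0002.ndjson")
	s.Flush()
}
```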

Release note (enterprise change): Changefeeds using the cloud
storage sink now have better throughput.
Release justification: performance fix
Expand the set of compression algorithms supported by changefeeds.

A faster implementation of the gzip algorithm is available and is
used by default.  The gzip implementation can be reverted
to the Go standard library implementation via the
`changefeed.fast_gzip.enabled` setting.

In addition, add support for compressing files with zstd.
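
A sketch of codec selection by name; the selection logic here is illustrative, and only wires up the standard library gzip. The real changefeed consults the `changefeed.fast_gzip.enabled` setting to pick a faster gzip (e.g. from `github.com/klauspost/compress`, which exposes the same Writer API) and also supports zstd:

```
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// newCompressor picks a compression codec by name; gzip.Writer satisfies
// io.WriteCloser, so callers can treat all codecs uniformly.
func newCompressor(algo string, w io.Writer) (io.WriteCloser, error) {
	switch algo {
	case "gzip":
		return gzip.NewWriter(w), nil
	default:
		return nil, fmt.Errorf("unsupported compression: %q", algo)
	}
}

func main() {
	var buf bytes.Buffer
	zw, err := newCompressor("gzip", &buf)
	if err != nil {
		panic(err)
	}
	if _, err := zw.Write([]byte(`{"id": 1}`)); err != nil {
		panic(err)
	}
	if err := zw.Close(); err != nil {
		panic(err)
	}
	fmt.Println("compressed bytes:", buf.Len())
}
```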

Release note (enterprise change): Changefeeds can emit files compressed
with the zstd algorithm -- which provides good compression, and is much
faster than gzip.  In addition, a new, faster implementation of
gzip is used by default.
By default, a changefeed distributes work to nodes based
on which nodes hold the leases for the ranges.
This makes sense, since running a rangefeed against the local node
is more efficient.

In a cluster where ranges are assigned almost uniformly
to each node, running a changefeed export is efficient:
all nodes stay busy until they are done.

The KV server is responsible for keeping ranges
more or less uniformly distributed across the cluster;
however, this determination is based on the set of all ranges
in the cluster, not on any particular table.

As a result, a table may not have a uniform
distribution of its ranges across all the nodes.
When this happens, a changefeed export can take a long
time due to the long tail: as each node completes its
set of assigned ranges, it idles until the changefeed completes.

This PR introduces a change (controlled via the
`changefeed.balance_range_distribution.enable` setting)
where the changefeed tries to produce a more balanced
assignment, in which each node is responsible for roughly
1/Nth of the work in a cluster of N nodes.
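
A sketch of the rebalancing idea, with illustrative types: cap each node at roughly ceil(R/N) of the R ranges and move the excess to underloaded nodes:

```
package main

import "fmt"

// balanceAssignments takes a leaseholder-based assignment of ranges to
// nodes and rebalances it so no node holds more than ceil(R/N) ranges,
// moving the excess to underloaded nodes.
func balanceAssignments(byNode map[int][]string, nodes []int) map[int][]string {
	total := 0
	for _, rs := range byNode {
		total += len(rs)
	}
	target := (total + len(nodes) - 1) / len(nodes) // ceil(R/N)

	// Trim overloaded nodes down to the target, collecting the excess.
	var overflow []string
	for _, n := range nodes {
		if len(byNode[n]) > target {
			overflow = append(overflow, byNode[n][target:]...)
			byNode[n] = byNode[n][:target]
		}
	}
	// Hand the excess to nodes that are below the target.
	for _, n := range nodes {
		for len(byNode[n]) < target && len(overflow) > 0 {
			byNode[n] = append(byNode[n], overflow[0])
			overflow = overflow[1:]
		}
	}
	return byNode
}

func main() {
	assignment := map[int][]string{
		1: {"r1", "r2", "r3", "r4", "r5"},
		2: {"r6"},
		3: nil,
	}
	fmt.Println(balanceAssignments(assignment, []int{1, 2, 3}))
}
```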

Release note (enterprise change): Changefeed exports are
up to 25% faster due to uniform work assignment.
Fix a latent Avro array-encoding bug where a previously allocated
memo array might contain a nil element, while the code assumed
that the element must always be a map.
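
The shape of the fix is a defensive nil check before reusing a memoized element; a simplified sketch (the memo structure here is illustrative, not the actual Avro encoder's):

```
package main

import "fmt"

// reuseOrAllocate sketches the fix: a memoized slice of maps may contain
// nil entries (e.g. slots that were grown but never populated), so the
// code must check for nil before treating an element as a usable map.
func reuseOrAllocate(memo []map[string]any, i int) map[string]any {
	if i < len(memo) && memo[i] != nil {
		return memo[i] // safe to reuse
	}
	return map[string]any{} // previously this path assumed memo[i] was non-nil
}

func main() {
	memo := make([]map[string]any, 3) // all elements start as nil
	m := reuseOrAllocate(memo, 1)
	m["field"] = "value"
	fmt.Println(m)
}
```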

Release note: None
@miretskiy miretskiy force-pushed the backport22.2-87968-88064-88370-87994-88395-88635-88672-88814 branch from 22604ab to 78646ec on September 28, 2022 20:10
@dhartunian dhartunian removed the request for review from a team September 30, 2022 15:04
@miretskiy miretskiy closed this Nov 14, 2022