roachtest: fix `fixtures import` on release branches #37726
Conversation
You ran a smoke check on this, right?
Yeah, and it caught that the cockroach binary is not uploaded to node 4 in the cdc test, so I'm going to hold off on merging until I verify all three affected tests.
In cockroachdb#36855, we bumped the workload-version of tpcc from 2.0.1 to 2.1.0. The IMPORT commands generated by `fixtures import` include the version built in so that all the nodes processing the IMPORT (which internally synthesize the data) can check against their internal version. This prevents nodes from generating incompatible data, but an unfortunate side effect of this is that the tpcc version in the workload binary must match the one in the cockroach nodes. This works for roachtests in master, but not for release branches, which use a workload binary built from master. Instead, use the `cockroach workload fixtures import` subcommand, which is guaranteed to match. Release note: None
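As a rough illustration of the fix (not the actual roachtest diff; the `runOnNode` helper and the tpcc flags here are placeholders), the change swaps which binary generates the IMPORT statements:

```go
// Sketch only: shows the before/after shape of the fixture-import call.
// runOnNode and the command strings are illustrative, not the real roachtest API.
package main

import "fmt"

// runOnNode stands in for the roachtest helper that runs a shell command on a node.
func runOnNode(cmd string) {
	fmt.Println("running:", cmd)
}

func main() {
	// Before: the standalone workload binary (always built from master)
	// synthesizes the data, so its embedded tpcc version can disagree with
	// the cockroach nodes on a release branch.
	runOnNode("./workload fixtures import tpcc")

	// After: the workload code linked into the cockroach binary generates the
	// IMPORT commands, so the version matches the cluster by construction.
	runOnNode("./cockroach workload fixtures import tpcc")
}
```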
TFTRs! Manual runs of the three modified tests are passing now.

bors r=tbg,nvanbenschoten
Build failed (retrying...)
Build failed (retrying...)
Build failed (retrying...)
37719: kv: finish and enable parallel commits r=nvanbenschoten a=nvanbenschoten

The majority of this PR addresses the final work item for parallel commits - splitting pre-commit QueryIntents from parallel-committing EndTransaction requests.

If a batch is performing a parallel commit and the transaction has previously pipelined writes that have yet to be proven successful, then the EndTransaction request will be preceded by a series of QueryIntent requests (see txn_pipeliner.go). Before evaluating, each of these QueryIntent requests will grab latches and wait for their corresponding write to finish. This is how the QueryIntent requests synchronize with the write they are trying to verify.

If these QueryIntents remained in the same batch as the EndTransaction request, they would force the EndTransaction request to wait for the previous write before evaluating itself. This "pipeline stall" would effectively negate the benefit of the parallel commit. To avoid this, we make sure that these "pre-commit" QueryIntent requests are split from and issued concurrently with the rest of the parallel commit batch. With this complete, parallel commits can then be enabled.

### Benchmarks

One of my recent experiments was to run the `indexes` benchmark at varying numbers of secondary indexes on a 3-node geo-distributed cluster. Below are plots with and without parallel commits enabled. We can see that without parallel commits, the introduction of the first secondary index doubles p50 latency and reduces throughput (at a fixed concurrency) by 56%. With parallel commits, the introduction of the secondary index has almost no noticeable effect on p50 latency and reduces throughput by only 29%.

![Secondary Indexes Throughput With Parallel Commits (1)](https://user-images.githubusercontent.com/5438456/58153546-50aee800-7c3d-11e9-9845-ca5ddf9e0779.png)

![Secondary Indexes p50 Latency With Parallel Commits (1)](https://user-images.githubusercontent.com/5438456/58153550-5278ab80-7c3d-11e9-8b9f-bcc1d702e39f.png)

I expect I'll polish off more comparison benchmarks over the next few weeks to further demonstrate the effect of parallel commits.

37726: roachtest: fix `fixtures import` on release branches r=tbg,nvanbenschoten a=danhhz

In #36855, we bumped the workload-version of tpcc from 2.0.1 to 2.1.0. The IMPORT commands generated by `fixtures import` include the version built in so that all the nodes processing the IMPORT (which internally synthesize the data) can check against their internal version. This prevents nodes from generating incompatible data, but an unfortunate side effect of this is that the tpcc version in the workload binary must match the one in the cockroach nodes.

This works for roachtests in master, but not for release branches, which use a workload binary built from master. Instead, use the `cockroach workload fixtures import` subcommand, which is guaranteed to match.

Release note: None

Co-authored-by: Nathan VanBenschoten <[email protected]>
Co-authored-by: Daniel Harrison <[email protected]>
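A toy sketch of the batch-splitting idea described in the parallel commits PR above (simplified stand-in types, not CockroachDB's actual kv machinery): the pre-commit QueryIntents are issued concurrently with the parallel-commit batch rather than ahead of the EndTransaction in the same batch.

```go
// Toy model of splitting "pre-commit" QueryIntents from the commit batch.
// The request type and send function are illustrative placeholders.
package main

import (
	"fmt"
	"sync"
)

type request string

// send evaluates a batch of requests in order; in the real system a
// QueryIntent in the same batch would force EndTransaction to wait for the
// write it is verifying (the "pipeline stall" described above).
func send(name string, batch []request) {
	for _, r := range batch {
		fmt.Printf("[%s] evaluated %s\n", name, r)
	}
}

func main() {
	queryIntents := []request{"QueryIntent(a)", "QueryIntent(b)"}
	commitBatch := []request{"Put(c)", "EndTransaction(parallel commit)"}

	// Issue the pre-commit QueryIntents concurrently with the commit batch
	// instead of prepending them to it, so the EndTransaction does not wait
	// behind the latches the QueryIntents acquire.
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		send("pre-commit", queryIntents)
	}()
	send("commit", commitBatch)
	wg.Wait()
}
```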
Build succeeded
In cockroachdb#37726, I switched a few places to use the `workload fixtures import` subcommand in the cockroach cli so the version string would match between the `fixtures import` command and whatever was producing the data, but I should have left this one alone. It's using `--csv-server`, so in contrast to the other places I fixed in that PR, here the workload binary is on both ends. (`--csv-server` causes cockroach to read csv data from an http server inside the standalone workload command.)

This broke because `fixtures import` works with the `--csv-server` flag on master, but that hasn't been backported to anything else, so the workload built into the cockroach cli doesn't know about that flag for `fixtures import`. My smoke tests didn't catch this because I was using a cockroach binary built from master. Hopefully this is the last of the fallout.

Touches cockroachdb#37371

Release note: None
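For context, a rough sketch of the shape of the reverted call (the helper and command strings are illustrative placeholders, not the exact test): with `--csv-server` the standalone workload binary has to stay on both ends, since it both serves the CSV data and issues the IMPORTs, and the workload built into a release-branch cockroach binary does not recognize the flag.

```go
// Sketch only: the csv-server case keeps the standalone workload binary on
// both ends. Commands and the runOnNode helper are illustrative placeholders.
package main

import "fmt"

func runOnNode(cmd string) { fmt.Println("running:", cmd) }

func main() {
	// The standalone workload binary serves the synthesized CSV data over HTTP...
	runOnNode("./workload csv-server --port=8081 &")
	// ...and also issues the IMPORTs pointed at that server. Because the same
	// binary is on both ends, the tpcc versions always agree, and the
	// --csv-server flag is understood (the workload built into release-branch
	// cockroach binaries does not know it for `fixtures import`).
	runOnNode("./workload fixtures import tpcc --csv-server=http://localhost:8081")
}
```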
37915: roachtest: fix import/tpcc/warehouses=1000/nodes=4 r=tbg a=danhhz

In #37726, I switched a few places to use the `workload fixtures import` subcommand in the cockroach cli so the version string would match between the `fixtures import` command and whatever was producing the data, but I should have left this one alone. It's using `--csv-server`, so in contrast to the other places I fixed in that PR, here the workload binary is on both ends. (`--csv-server` causes cockroach to read csv data from an http server inside the standalone workload command.)

This broke because `fixtures import` works with the `--csv-server` flag on master, but that hasn't been backported to anything else, so the workload built into the cockroach cli doesn't know about that flag for `fixtures import`. My smoke tests didn't catch this because I was using a cockroach binary built from master. Hopefully this is the last of the fallout.

Touches #37371

Release note: None

Co-authored-by: Daniel Harrison <[email protected]>
This was missed during cockroachdb#37726. Closes cockroachdb#37488. Touches cockroachdb#37163. Release note: None