
Added diff of an InfoStore with a supplied Filter #4

Merged
merged 2 commits into from
Feb 16, 2014

Conversation

spencerkimball
Member

Diffs determine the approximate number of keys known to the builder of the
Filter but unknown to the InfoStore. It's a heuristic for how good a
gossip peer the builder of the Filter would be: a greater diff is
better.

Introduced a visitor pattern "visitInfos" for InfoStore to remove
boilerplate iteration code that had cropped up in 4 or 5 places.

Added unit tests for diffing.
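The description above can be sketched in Go. This is a minimal, hypothetical illustration rather than the actual gossip code: `Info`, `InfoStore`, `Filter`, `visitInfos`, and `Diff` are simplified stand-ins, and the real filter is probabilistic (which is why the diff is only approximate).

```go
package main

import "fmt"

// Info and InfoStore are simplified stand-ins for the gossip types.
type Info struct {
	Key string
}

type InfoStore struct {
	Infos map[string]*Info
}

// visitInfos is the visitor described above: a single iteration point
// with a callback per info, replacing per-call-site boilerplate loops.
func (is *InfoStore) visitInfos(visitInfo func(*Info) error) error {
	for _, i := range is.Infos {
		if err := visitInfo(i); err != nil {
			return err
		}
	}
	return nil
}

// Filter stands in for the probabilistic key filter; HasKey may report
// false positives, so any diff computed from it is approximate.
type Filter struct {
	keys map[string]bool
	N    int // number of keys the filter's builder inserted
}

func (f *Filter) HasKey(key string) bool { return f.keys[key] }

// Diff estimates how many keys the filter's builder knows that this
// store does not: the builder's count minus the observable overlap.
func (is *InfoStore) Diff(f *Filter) int {
	overlap := 0
	_ = is.visitInfos(func(i *Info) error {
		if f.HasKey(i.Key) {
			overlap++
		}
		return nil
	})
	return f.N - overlap
}

func main() {
	is := &InfoStore{Infos: map[string]*Info{
		"a": {Key: "a"}, "b": {Key: "b"},
	}}
	f := &Filter{keys: map[string]bool{"a": true, "c": true, "d": true}, N: 3}
	// Builder knows a, c, d; the store knows a, b; overlap is 1, diff is 2.
	fmt.Println(is.Diff(f))
}
```

A larger diff means the peer's builder likely holds more keys this store is missing, making it a better gossip partner.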

@spencerkimball
Member Author

Pete, I'm kind of dropping you into the middle of things. Not sure how to do a pull request containing everything I've already pushed to the master branch. But maybe you can do a quick review of the whole gossip/ subdirectory.

@@ -181,6 +181,39 @@ func (is *InfoStore) AddInfo(info *Info) bool {
return true
}

// Visitor pattern to run two methods in the course of visiting all
Contributor

// visitInfos ... or else golint will yell

Member Author

Yep, just saw all of your changes and am going to update this change and repush.

Member Author

Wow, I really just don't understand git. Is it generally a bad idea to try to rebase my branch after pulling master to get your lint changes? The rebase had conflicts as your lint change modified some things which were changed/deleted. I fixed those conflicts, added them and did git rebase --continue.

Now if I try to push, I get:

To https://github.com/spencerkimball/cockroach.git
! [rejected] spencerkimball/diff-filter -> spencerkimball/diff-filter (non-fast-forward)
error: failed to push some refs to 'https://github.com/spencerkimball/cockroach.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

It's not obvious from the git man page what (e.g. 'git pull ...') is supposed to actually mean in practice.

Contributor

Ugh, hard to explain from my phone, and my laptop is out of power.

If you're the only person working on the branch, rebasing is OK. But rebasing always rewrites history. Roughly, it reapplies the original patches in order after the last commit you're naming (master in your case).

At first, avoid git pull since it does an implicit merge. Just always do git fetch followed by git merge. Once you're used to things, git pull is a useful shortcut but can still burn you. GitHub has a good description of both.

Assuming no one else has done work on your topic branch, you can git push -f. By default, git prevents you from rewriting history in a remote / published branch.

I can give you a good rundown at the office

shawn

Sent from my iPhone

Member Author

Cool, starting to make a little more sense. Do people use rebase, or do
people skip it, in general and at Square?


Collaborator

I've been rebasing at Square. Definitely check out gerrithub.io as soon as possible. I'm going blind looking at these unified diffs.

Contributor

> I've been rebasing at Square. Definitely check out gerrithub.io as soon as possible. I'm going blind looking at these unified diffs.

Yes! I hate GitHub PRs

spencerkimball added a commit that referenced this pull request Feb 16, 2014
Added diff of an InfoStore with a supplied Filter
@spencerkimball spencerkimball merged commit 495da5f into master Feb 16, 2014
@spencerkimball spencerkimball deleted the spencerkimball/diff-filter branch February 17, 2014 01:48
tamird added a commit that referenced this pull request Jul 19, 2015
soniabhishek pushed a commit to soniabhishek/cockroach that referenced this pull request Feb 15, 2017
andy-kimball added a commit to andy-kimball/cockroach that referenced this pull request May 11, 2019
This test is occasionally flaking under heavy race stress in CI runs. Here
is the probable sequence of events:

1. A<-B merge starts and Subsume request locks down B.
2. Watcher on B sends PushTxn request, which is intercepted by the
   TestingRequestFilter in the test.
3. The merge txn aborts due to interference with the replica GC, since it's
   concurrently reading range descriptors.
4. The merge txn is retried, but the Watcher on B is locking the range, so it
   aborts again.
5. Step 4 repeats until the allowPushTxn channel fills up (it has a capacity of 10).
   This causes a deadlock because the merge txn can't continue. Meanwhile, the
   watcher is blocked waiting for the results of the PushTxn request, which gets
   blocked waiting for the merge txn.

The fix is to get rid of the arbitrarily limited channel size of 10 and use
sync.Cond synchronization instead. Multiple retries of the merge txn will
repeatedly signal the Cond, rather than fill up the channel. One of the problems
with the channel was that there can be an imbalance between the number of items
sent to the channel (by merge txns) with the number of items received from the
channel (by the watcher). This imbalance meant the channel gradually filled up
until finally the right sequence of events caused deadlock.

Using a sync.Cond also fixes a race condition I saw several times, in which the
merge transaction tries to send to the channel while it is being concurrently
closed.

Release note: None
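The fix described above can be sketched as follows. This is a hypothetical stand-in for the test's synchronization, not the actual patch: the `pushGate` type and its methods are illustrative names. The point is that any number of merge-txn retries can signal without ever filling a fixed-size buffer, while the watcher blocks on the condition variable until at least one signal has arrived.

```go
package main

import (
	"fmt"
	"sync"
)

// pushGate replaces a bounded channel with sync.Cond: signaling never
// blocks the signaler, no matter how many retries occur.
type pushGate struct {
	mu      sync.Mutex
	cond    *sync.Cond
	pending int // retries signaled but not yet consumed by the watcher
}

func newPushGate() *pushGate {
	g := &pushGate{}
	g.cond = sync.NewCond(&g.mu)
	return g
}

// allowPush is called by each merge-txn retry; unlike a send on a full
// channel, it never blocks.
func (g *pushGate) allowPush() {
	g.mu.Lock()
	g.pending++
	g.mu.Unlock()
	g.cond.Signal()
}

// waitForPush is called by the watcher; it blocks until a retry signals.
func (g *pushGate) waitForPush() {
	g.mu.Lock()
	for g.pending == 0 {
		g.cond.Wait()
	}
	g.pending--
	g.mu.Unlock()
}

func main() {
	g := newPushGate()
	done := make(chan struct{})
	go func() {
		g.waitForPush()
		close(done)
	}()
	// Simulate many retries; with a capacity-10 channel the 11th send
	// would block and the deadlock described above could occur.
	for i := 0; i < 100; i++ {
		g.allowPush()
	}
	<-done
	fmt.Println("watcher unblocked")
}
```

This also sidesteps the send-on-closed-channel race mentioned above, since a Cond has no close operation to race against.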
andy-kimball added a commit to andy-kimball/cockroach that referenced this pull request May 15, 2019
Fixes cockroachdb#37477

Release note: None
craig bot pushed a commit that referenced this pull request May 15, 2019
37477: storage: Fix deadlock in TestStoreRangeMergeSlowWatcher r=andy-kimball a=andy-kimball


Co-authored-by: Andrew Kimball <[email protected]>
tbg pushed a commit that referenced this pull request May 28, 2019
@tbg tbg mentioned this pull request Mar 16, 2022
craig bot pushed a commit that referenced this pull request Apr 29, 2022
79911: opt: refactor and test lookup join key column and expr generation r=mgartner a=mgartner

#### opt: simplify fetching outer column in CustomFuncs.findComputedColJoinEquality

Previously, `CustomFuncs.findComputedColJoinEquality` used
`CustomFuncs.OuterCols` to retrieve the outer columns of computed column
expressions. `CustomFuncs.OuterCols` returns the cached outer columns in
the expression if it is a `memo.ScalarPropsExpr`, and falls back to
calculating the outer columns with `memo.BuildSharedProps` otherwise.
Computed column expressions are never `memo.ScalarPropsExpr`s, so we
just use `memo.BuildSharedProps` directly.

Release note: None

#### opt: make RemapCols a method on Factory instead of CustomFuncs

Release note: None

#### opt: use partial-index-reduced filters when building lookup expressions

This commit makes a minor change to `generateLookupJoinsImpl`.
Previously, equality filters were extracted from the original `ON`
filters. Now they are extracted from filters that have been reduced by
partial index implication. This has no effect on behavior because
equality filters that reference columns in two tables cannot exist in
partial index predicates, so they will never be eliminated during
partial index implication.

Release note: None

#### opt: move some lookup join generation logic to the lookupjoin package

This commit adds a new `lookupjoin` package. Logic for determining the
key columns and lookup expressions for lookup joins has been moved to
`lookupjoin.ConstraintBuilder`. The code was moved with as few changes
as possible, and the behavior does not change in any way. This move will
make it easier to test this code in isolation in the future, and allow
for further refactoring.

Release note: None

#### opt: generalize lookupjoin.ConstraintBuilder API

This commit makes the lookupjoin.ConstraintBuilder API more general to
make unit testing easier in a future commit.

Release note: None

#### opt: add data-driven tests for lookupjoin.ConstraintBuilder

Release note: None

#### opt: add lookupjoin.Constraint struct

The `lookupjoin.Constraint` struct has been added to encapsulate
multiple data structures that represent a strategy for constraining a
lookup join.

Release note: None

80511: pkg/cloud/azure: Support specifying Azure environments in storage URLs r=adityamaru a=nlowe-sx

The Azure Storage cloud provider learned a new parameter, AZURE_ENVIRONMENT,
which specifies which azure environment the storage account in question
belongs to. This allows cockroach to back up and restore data to Azure
Storage Accounts outside the main Azure Public Cloud. For backwards
compatibility, this defaults to "AzurePublicCloud" if AZURE_ENVIRONMENT
is not specified.
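The defaulting behavior can be sketched like this. The parameter name `AZURE_ENVIRONMENT` and the default `AzurePublicCloud` come from the PR text; the `azureEnvironment` function itself is a hypothetical illustration, not cockroach's actual URL-parsing code.

```go
package main

import (
	"fmt"
	"net/url"
)

// azureEnvironment extracts the AZURE_ENVIRONMENT query parameter from a
// backup URL, falling back to the public cloud for backwards compatibility.
func azureEnvironment(backupURL string) (string, error) {
	u, err := url.Parse(backupURL)
	if err != nil {
		return "", err
	}
	env := u.Query().Get("AZURE_ENVIRONMENT")
	if env == "" {
		// Default described above: unqualified URLs keep working
		// against the main Azure Public Cloud.
		env = "AzurePublicCloud"
	}
	return env, nil
}

func main() {
	e, _ := azureEnvironment("azure://container/path?AZURE_ACCOUNT_NAME=a&AZURE_ENVIRONMENT=AzureUSGovernmentCloud")
	fmt.Println(e) // AzureUSGovernmentCloud
	e, _ = azureEnvironment("azure://container/path?AZURE_ACCOUNT_NAME=a")
	fmt.Println(e) // AzurePublicCloud
}
```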
 
Fixes #47163
 
## Verification Evidence
 
I spun up a single node cluster:
 
```
nlowe@nlowe-z4l:~/projects/github/cockroachdb/cockroach [feat/47163-azure-storage-support-multiple-environments L|✚ 2] [🗓  2022-04-22 08:25:49]
$ bazel run //pkg/cmd/cockroach:cockroach -- start-single-node --insecure
WARNING: Option 'host_javabase' is deprecated
WARNING: Option 'javabase' is deprecated
WARNING: Option 'host_java_toolchain' is deprecated
WARNING: Option 'java_toolchain' is deprecated
INFO: Invocation ID: 11504a98-f767-413a-8994-8f92793c2ecf
INFO: Analyzed target //pkg/cmd/cockroach:cockroach (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
Target //pkg/cmd/cockroach:cockroach up-to-date:
  _bazel/bin/pkg/cmd/cockroach/cockroach_/cockroach
INFO: Elapsed time: 0.358s, Critical Path: 0.00s
INFO: 1 process: 1 internal.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
*
* WARNING: ALL SECURITY CONTROLS HAVE BEEN DISABLED!
*
* This mode is intended for non-production testing only.
*
* In this mode:
* - Your cluster is open to any client that can access any of your IP addresses.
* - Intruders with access to your machine or network can observe client-server traffic.
* - Intruders can log in without password and read or write any data in the cluster.
* - Intruders can consume all your server's resources and cause unavailability.
*
*
* INFO: To start a secure server without mandating TLS for clients,
* consider --accept-sql-without-tls instead. For other options, see:
*
* - https://go.crdb.dev/issue-v/53404/dev
* - https://www.cockroachlabs.com/docs/dev/secure-a-cluster.html
*
*
* WARNING: neither --listen-addr nor --advertise-addr was specified.
* The server will advertise "nlowe-z4l" to other nodes, is this routable?
*
* Consider using:
* - for local-only servers:  --listen-addr=localhost
* - for multi-node clusters: --advertise-addr=<host/IP addr>
*
*
CockroachDB node starting at 2022-04-22 15:25:55.461315977 +0000 UTC (took 2.1s)
build:               CCL unknown @  (go1.17.6)
webui:               http://nlowe-z4l:8080/
sql:                 postgresql://root@nlowe-z4l:26257/defaultdb?sslmode=disable
sql (JDBC):          jdbc:postgresql://nlowe-z4l:26257/defaultdb?sslmode=disable&user=root
RPC client flags:    /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach <client cmd> --host=nlowe-z4l:26257 --insecure
logs:                /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/logs
temp dir:            /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/cockroach-temp4100501952
external I/O path:   /home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data/extern
store[0]:            path=/home/nlowe/.cache/bazel/_bazel_nlowe/cf6ed4d0d14c8e474a5c30d572846d8a/execroot/cockroach/bazel-out/k8-fastbuild/bin/pkg/cmd/cockroach/cockroach_/cockroach.runfiles/cockroach/cockroach-data
storage engine:      pebble
clusterID:           bb3942d7-f241-4d26-aa4a-1bd0d6556e4d
status:              initialized new cluster
nodeID:              1
```
 
I was then able to view the contents of a backup hosted in an azure
government storage account:
 
```
root@:26257/defaultdb> SELECT DISTINCT object_name FROM [SHOW BACKUP 'azure://container/path/to/backup?AZURE_ACCOUNT_NAME=account&AZURE_ACCOUNT_KEY=***&AZURE_ENVIRONMENT=AzureUSGovernmentCloud'] WHERE object_type = 'database';
               object_name
------------------------------------------
  example_database
  ...
(17 rows)
 
Time: 5.859632889s
```
 
Omitting the `AZURE_ENVIRONMENT` parameter, we can see cockroach
defaults to the public cloud where my storage account does not exist:
 
```
root@:26257/defaultdb> SELECT DISTINCT object_name FROM [SHOW BACKUP 'azure://container/path/to/backup?AZURE_ACCOUNT_NAME=account&AZURE_ACCOUNT_KEY=***'] WHERE object_type = 'database';
ERROR: reading previous backup layers: unable to list files for specified blob: Get "https://account.blob.core.windows.net/container?comp=list&delimiter=path%2Fto%2Fbackup&restype=container&timeout=61": dial tcp: lookup account.blob.core.windows.net on 8.8.8.8:53: no such host
```
 
## Tests
 
Two new tests are added to verify that the storage account URL is correctly
built from the provided Azure Environment name, and that the Environment
defaults to the Public Cloud if unspecified for backwards compatibility. I
verified the existing tests pass against a government storage account after
specifying `AZURE_ENVIRONMENT` as `AzureUSGovernmentCloud` in the backup URL
query parameters:
 
```
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓  2022-04-22 17:38:26]
$ export AZURE_ACCOUNT_NAME=account
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓  2022-04-22 17:38:42]
$ export AZURE_ACCOUNT_KEY=***
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓  2022-04-22 17:39:25]
$ export AZURE_CONTAINER=container
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓  2022-04-22 17:39:48]
$ export AZURE_ENVIRONMENT=AzureUSGovernmentCloud
nlowe@nlowe-mbp:~/projects/github/cockroachdb/cockroachdb [feat/47163-azure-storage-support-multiple-environments| …3] [🗓  2022-04-22 17:40:15]
$ bazel test --test_output=streamed --test_arg=-test.v --action_env=AZURE_ACCOUNT_NAME --action_env=AZURE_ACCOUNT_KEY --action_env=AZURE_CONTAINER --action_env=AZURE_ENVIRONMENT //pkg/cloud/azure:azure_test
INFO: Invocation ID: aa88a942-f3c7-4df6-bade-8f5f0e18041f
WARNING: Streamed test output requested. All tests will be run locally, without sharding, one at a time
INFO: Build option --action_env has changed, discarding analysis cache.
INFO: Analyzed target //pkg/cloud/azure:azure_test (468 packages loaded, 16382 targets configured).
INFO: Found 1 test target...
initialized metamorphic constant "span-reuse-rate" with value 28
=== RUN   TestAzure
=== RUN   TestAzure/simple_round_trip
=== RUN   TestAzure/exceeds-4mb-chunk
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#00
    cloud_test_helpers.go:226: read 3345 of file at 4778744
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#1
    cloud_test_helpers.go:226: read 7228 of file at 226589
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#2
    cloud_test_helpers.go:226: read 634 of file at 256284
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#3
    cloud_test_helpers.go:226: read 7546 of file at 3546208
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#4
    cloud_test_helpers.go:226: read 24123 of file at 4821795
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#5
    cloud_test_helpers.go:226: read 16899 of file at 403428
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#6
    cloud_test_helpers.go:226: read 29467 of file at 4886370
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#7
    cloud_test_helpers.go:226: read 11700 of file at 1876920
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#8
    cloud_test_helpers.go:226: read 2928 of file at 489781
=== RUN   TestAzure/exceeds-4mb-chunk/rand-readats/#9
    cloud_test_helpers.go:226: read 19933 of file at 1483342
=== RUN   TestAzure/read-single-file-by-uri
=== RUN   TestAzure/write-single-file-by-uri
=== RUN   TestAzure/file-does-not-exist
=== RUN   TestAzure/List
=== RUN   TestAzure/List/root
=== RUN   TestAzure/List/file-slash-numbers-slash
=== RUN   TestAzure/List/root-slash
=== RUN   TestAzure/List/file
=== RUN   TestAzure/List/file-slash
=== RUN   TestAzure/List/slash-f
=== RUN   TestAzure/List/nothing
=== RUN   TestAzure/List/delim-slash-file-slash
=== RUN   TestAzure/List/delim-data
--- PASS: TestAzure (34.81s)
    --- PASS: TestAzure/simple_round_trip (9.66s)
    --- PASS: TestAzure/exceeds-4mb-chunk (16.45s)
        --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats (6.41s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#00 (0.15s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#1 (0.64s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#2 (0.65s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#3 (0.60s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#4 (0.75s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#5 (0.80s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#6 (0.75s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#7 (0.65s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#8 (0.65s)
            --- PASS: TestAzure/exceeds-4mb-chunk/rand-readats/#9 (0.77s)
    --- PASS: TestAzure/read-single-file-by-uri (0.60s)
    --- PASS: TestAzure/write-single-file-by-uri (0.60s)
    --- PASS: TestAzure/file-does-not-exist (1.05s)
    --- PASS: TestAzure/List (2.40s)
        --- PASS: TestAzure/List/root (0.30s)
        --- PASS: TestAzure/List/file-slash-numbers-slash (0.30s)
        --- PASS: TestAzure/List/root-slash (0.30s)
        --- PASS: TestAzure/List/file (0.30s)
        --- PASS: TestAzure/List/file-slash (0.30s)
        --- PASS: TestAzure/List/slash-f (0.30s)
        --- PASS: TestAzure/List/nothing (0.15s)
        --- PASS: TestAzure/List/delim-slash-file-slash (0.15s)
        --- PASS: TestAzure/List/delim-data (0.30s)
=== RUN   TestAntagonisticAzureRead
--- PASS: TestAntagonisticAzureRead (103.90s)
=== RUN   TestParseAzureURL
=== RUN   TestParseAzureURL/Defaults_to_Public_Cloud_when_AZURE_ENVIRONEMNT_unset
=== RUN   TestParseAzureURL/Can_Override_AZURE_ENVIRONMENT
--- PASS: TestParseAzureURL (0.00s)
    --- PASS: TestParseAzureURL/Defaults_to_Public_Cloud_when_AZURE_ENVIRONEMNT_unset (0.00s)
    --- PASS: TestParseAzureURL/Can_Override_AZURE_ENVIRONMENT (0.00s)
=== RUN   TestMakeAzureStorageURLFromEnvironment
=== RUN   TestMakeAzureStorageURLFromEnvironment/AzurePublicCloud
=== RUN   TestMakeAzureStorageURLFromEnvironment/AzureUSGovernmentCloud
--- PASS: TestMakeAzureStorageURLFromEnvironment (0.00s)
    --- PASS: TestMakeAzureStorageURLFromEnvironment/AzurePublicCloud (0.00s)
    --- PASS: TestMakeAzureStorageURLFromEnvironment/AzureUSGovernmentCloud (0.00s)
PASS
Target //pkg/cloud/azure:azure_test up-to-date:
  _bazel/bin/pkg/cloud/azure/azure_test_/azure_test
INFO: Elapsed time: 159.865s, Critical Path: 152.35s
INFO: 66 processes: 2 internal, 64 darwin-sandbox.
INFO: Build completed successfully, 66 total actions
//pkg/cloud/azure:azure_test                                             PASSED in 139.9s
 
INFO: Build completed successfully, 66 total actions
```

80705: kvclient: fix gRPC stream leak in rangefeed client r=tbg,srosenberg a=erikgrinaker

When the DistSender rangefeed client received a `RangeFeedError` message
and propagated a retryable error up the stack, it would fail to close
the existing gRPC stream, causing stream/goroutine leaks.

Release note (bug fix): Fixed a goroutine leak when internal rangefeed
clients received certain kinds of retriable errors.

80762: joberror: add ConnectionReset/ConnectionRefused to retryable err allow list r=miretskiy a=adityamaru

Bulk jobs will no longer treat `sysutil.IsErrConnectionReset`
and `sysutil.IsErrConnectionRefused` as permanent errors. IMPORT,
RESTORE and BACKUP will treat this error as transient and retry.

Release note: None

80773: backupccl: break dependency to testcluster r=irfansharif a=irfansharif

Noticed we were building testing library packages when building CRDB
binaries.

    $ bazel query "somepath(//pkg/cmd/cockroach-short, //pkg/testutils/testcluster)"
    //pkg/cmd/cockroach-short:cockroach-short
    //pkg/cmd/cockroach-short:cockroach-short_lib
    //pkg/ccl:ccl
    //pkg/ccl/backupccl:backupccl
    //pkg/testutils/testcluster:testcluster

Release note: None

Co-authored-by: Marcus Gartner <[email protected]>
Co-authored-by: Nathan Lowe <[email protected]>
Co-authored-by: Erik Grinaker <[email protected]>
Co-authored-by: Aditya Maru <[email protected]>
Co-authored-by: irfan sharif <[email protected]>
pav-kv pushed a commit to pav-kv/cockroach that referenced this pull request Mar 5, 2024
Add script to generate and verify proto files