
roachtest: backup/2TB/n10cpu4 failed #90119

Closed
cockroach-teamcity opened this issue Oct 18, 2022 · 2 comments · Fixed by #90352
Labels
- branch-release-22.1: Used to mark GA and release blockers, technical advisories, and bugs for 22.1
- C-test-failure: Broken test (automatically or manually discovered).
- O-roachtest
- O-robot: Originated from a bot.
- release-blocker: Indicates a release-blocker. Use with branch-release-2x.x label to denote which branch is blocked.
- T-disaster-recovery


cockroach-teamcity commented Oct 18, 2022

roachtest.backup/2TB/n10cpu4 failed with artifacts on release-22.1 @ 880c9a05d702dc493c9701095144b91aacb01752:

		  | golang.org/x/sync/errgroup.(*Group).Go.func1
		  | 	golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:57
		  | runtime.goexit
		  | 	GOROOT/src/runtime/asm_amd64.s:1581
		Wraps: (2) output in run_063029.903749586_n1_cockroach_sql
		Wraps: (3) ./cockroach sql --insecure -e "
		  | 				BACKUP bank.bank TO 'gs://cockroachdb-backup-testing/teamcity-6994321-1666070300-22-n10cpu4?AUTH=implicit'" returned
		  | stderr:
		  | ERROR: failed to run backup: exporting 1127 ranges: googleapi: got HTTP response code 503 with body:
		  | Failed running "sql"
		  |
		  | stdout:
		Wraps: (4) COMMAND_PROBLEM
		Wraps: (5) Node 1. Command with error:
		  | ``````
		  | ./cockroach sql --insecure -e "
		  | 				BACKUP bank.bank TO 'gs://cockroachdb-backup-testing/teamcity-6994321-1666070300-22-n10cpu4?AUTH=implicit'"
		  | ``````
		Wraps: (6) exit status 1
		Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.Cmd (5) *hintdetail.withDetail (6) *exec.ExitError

	monitor.go:127,backup.go:716,test_runner.go:883: monitor failure: monitor task failed: t.Fatal() was called
		(1) attached stack trace
		  -- stack trace:
		  | main.(*monitorImpl).WaitE
		  | 	main/pkg/cmd/roachtest/monitor.go:115
		  | main.(*monitorImpl).Wait
		  | 	main/pkg/cmd/roachtest/monitor.go:123
		  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerBackup.func1
		  | 	github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/backup.go:716
		  | main.(*testRunner).runTest.func2
		  | 	main/pkg/cmd/roachtest/test_runner.go:883
		Wraps: (2) monitor failure
		Wraps: (3) attached stack trace
		  -- stack trace:
		  | main.(*monitorImpl).wait.func2
		  | 	main/pkg/cmd/roachtest/monitor.go:171
		Wraps: (4) monitor task failed
		Wraps: (5) attached stack trace
		  -- stack trace:
		  | main.init
		  | 	main/pkg/cmd/roachtest/monitor.go:80
		  | runtime.doInit
		  | 	GOROOT/src/runtime/proc.go:6498
		  | runtime.main
		  | 	GOROOT/src/runtime/proc.go:238
		  | runtime.goexit
		  | 	GOROOT/src/runtime/asm_amd64.s:1581
		Wraps: (6) t.Fatal() was called
		Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
Help

See: roachtest README

See: How To Investigate (internal)

Same failure on other branches

/cc @cockroachdb/disaster-recovery

This test on roachdash | Improve this report!

Jira issue: CRDB-20595

cockroach-teamcity added the branch-release-22.1, C-test-failure, O-roachtest, O-robot, and release-blocker labels Oct 18, 2022
cockroach-teamcity added this to the 22.1 milestone Oct 18, 2022
cockroach-teamcity (Member, Author) commented:

roachtest.backup/2TB/n10cpu4 failed with artifacts on release-22.1 @ 477af1d876e3e62b26900854c2f33eaa7cec73db:

		  | golang.org/x/sync/errgroup.(*Group).Go.func1
		  | 	golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:57
		  | runtime.goexit
		  | 	GOROOT/src/runtime/asm_amd64.s:1581
		Wraps: (2) output in run_063205.434406631_n1_cockroach_sql
		Wraps: (3) ./cockroach sql --insecure -e "
		  | 				BACKUP bank.bank TO 'gs://cockroachdb-backup-testing/teamcity-7053218-1666329629-21-n10cpu4?AUTH=implicit'" returned
		  | stderr:
		  | ERROR: failed to run backup: exporting 1125 ranges: googleapi: got HTTP response code 503 with body: Service Unavailable
		  | Failed running "sql"
		  |
		  | stdout:
		Wraps: (4) COMMAND_PROBLEM
		Wraps: (5) Node 1. Command with error:
		  | ``````
		  | ./cockroach sql --insecure -e "
		  | 				BACKUP bank.bank TO 'gs://cockroachdb-backup-testing/teamcity-7053218-1666329629-21-n10cpu4?AUTH=implicit'"
		  | ``````
		Wraps: (6) exit status 1
		Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) errors.Cmd (5) *hintdetail.withDetail (6) *exec.ExitError

	monitor.go:127,backup.go:716,test_runner.go:883: monitor failure: monitor task failed: t.Fatal() was called
		(1) attached stack trace
		  -- stack trace:
		  | main.(*monitorImpl).WaitE
		  | 	main/pkg/cmd/roachtest/monitor.go:115
		  | main.(*monitorImpl).Wait
		  | 	main/pkg/cmd/roachtest/monitor.go:123
		  | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerBackup.func1
		  | 	github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/backup.go:716
		  | main.(*testRunner).runTest.func2
		  | 	main/pkg/cmd/roachtest/test_runner.go:883
		Wraps: (2) monitor failure
		Wraps: (3) attached stack trace
		  -- stack trace:
		  | main.(*monitorImpl).wait.func2
		  | 	main/pkg/cmd/roachtest/monitor.go:171
		Wraps: (4) monitor task failed
		Wraps: (5) attached stack trace
		  -- stack trace:
		  | main.init
		  | 	main/pkg/cmd/roachtest/monitor.go:80
		  | runtime.doInit
		  | 	GOROOT/src/runtime/proc.go:6498
		  | runtime.main
		  | 	GOROOT/src/runtime/proc.go:238
		  | runtime.goexit
		  | 	GOROOT/src/runtime/asm_amd64.s:1581
		Wraps: (6) t.Fatal() was called
		Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError
Help

See: roachtest README

See: How To Investigate (internal)

Same failure on other branches

This test on roachdash | Improve this report!

msbutler (Collaborator) commented:

This is the 503 error tracked in #89057.

rhu713 pushed a commit to rhu713/cockroach that referenced this issue Oct 24, 2022
Previously, the retry policy for GCS was "retry idempotent". That policy
allows reads to be retried, but it prevented the current write behavior from
being retried. Given the at-least-once nature of the existing uses of the
GCS writer in CDC and backups, this patch changes the retry policy to always
retry.

Fixes cockroachdb#90119

Release note: None
craig bot pushed a commit that referenced this issue Oct 24, 2022
90352: cloud/gcp: use the retry always policy for gcs r=rhu713 a=rhu713


Co-authored-by: Rui Hu <[email protected]>
blathers-crl bot pushed a commit that referenced this issue Oct 24, 2022
blathers-crl bot pushed a commit that referenced this issue Oct 24, 2022