[~/velero-ws/velero] (main): CLOUD_PROVIDER=aws VSL_CONFIG=region=us-east-2 BSL_CONFIG=region=us-east-2 CREDS_FILE=/Users/jiangd/tmp/velero-test/aws-credential BSL_BUCKET=jt-velero-test-upgrade ADDITIONAL_OBJECT_STORE_PROVIDER=aws ADDITIONAL_BSL_CONFIG=region=us-east-2 ADDITIONAL_BSL_BUCKET=jt-additional-bucket ADDITIONAL_CREDS_FILE=/Users/jiangd/tmp/velero-test/additional-aws-credential GINKGO_FOCUS='\[Snapshot\] Velero tests on cluster' VELERO_IMAGE=velero/velero:v1.7.0-rc.1 REGISTRY_CREDENTIAL_FILE=/Users/jiangd/tmp/docker-cred.json VERSION=v1.7.0-rc.1 GOPATH=/Users/jiangd/go/ make test-e2e
GOOS=darwin \
GOARCH=amd64 \
VERSION=v1.7.0-rc.1 \
REGISTRY=velero \
PKG=github.com/vmware-tanzu/velero \
BIN=velero \
GIT_SHA=6f64052e94ef71c9d360863f341fe3c11e319f08 \
GIT_TREE_STATE=dirty \
OUTPUT_DIR=$(pwd)/_output/bin/darwin/amd64 \
./hack/build.sh
/Library/Developer/CommandLineTools/usr/bin/make -e VERSION=v1.7.0-rc.1 -C test/e2e run
go get github.com/onsi/ginkgo/ginkgo
Using credentials from /Users/jiangd/tmp/velero-test/aws-credential
Using bucket jt-velero-test-upgrade to store backups from E2E tests
Using cloud provider aws
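The entire run is parameterized through environment variables consumed by the e2e Makefile; the GINKGO_FOCUS regex is what narrows the suite to the two [Snapshot] specs reported below. A minimal re-invocation would look like the sketch below (values copied from the command above; whether every omitted variable, e.g. REGISTRY_CREDENTIAL_FILE, is truly optional has not been verified):

    CLOUD_PROVIDER=aws \
    BSL_CONFIG=region=us-east-2 VSL_CONFIG=region=us-east-2 \
    BSL_BUCKET=jt-velero-test-upgrade \
    CREDS_FILE=/Users/jiangd/tmp/velero-test/aws-credential \
    ADDITIONAL_OBJECT_STORE_PROVIDER=aws \
    ADDITIONAL_BSL_CONFIG=region=us-east-2 \
    ADDITIONAL_BSL_BUCKET=jt-additional-bucket \
    ADDITIONAL_CREDS_FILE=/Users/jiangd/tmp/velero-test/additional-aws-credential \
    GINKGO_FOCUS='\[Snapshot\] Velero tests on cluster' \
    VELERO_IMAGE=velero/velero:v1.7.0-rc.1 \
    VERSION=v1.7.0-rc.1 \
    make test-e2e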
Running Suite: E2e Suite
========================
Random Seed: 1631945832
Will run 2 of 9 specs

SSSSSS
------------------------------
[Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups when kibishii is the sample workload
  should be successfully backed up and restored to the default BackupStorageLocation
  /Users/jiangd/velero-ws/velero/test/e2e/backup_test.go:75
Running cmd "/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero install --namespace velero --crds-version v1 --image velero/velero:v1.7.0-rc.1 --use-volume-snapshots --provider aws --backup-location-config region=us-east-2 --bucket jt-velero-test-upgrade --secret-file /Users/jiangd/tmp/velero-test/aws-credential --snapshot-location-config region=us-east-2 --plugins velero/velero-plugin-for-aws:v1.3.0-rc.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/resticrepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting velero CRDs ready...
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/resticrepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero install --namespace velero --crds-version v1 --image velero/velero:v1.7.0-rc.1 --use-volume-snapshots --provider aws --backup-location-config region=us-east-2 --bucket jt-velero-test-upgrade --secret-file /Users/jiangd/tmp/velero-test/aws-credential --snapshot-location-config region=us-east-2 --plugins velero/velero-plugin-for-aws:v1.3.0-rc.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/resticrepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
backup cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero create backup backup-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7 --include-namespaces kibishii-workload --wait --snapshot-volumes
Backup request "backup-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
...
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7` and `velero backup logs backup-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7`.
get backup cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero backup get -o json backup-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7
Simulating a disaster by removing namespace kibishii-workload
restore cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero create restore restore-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7 --from-backup backup-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7 --wait
Restore request "restore-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
..............
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7` and `velero restore logs restore-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7`.
get restore cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero restore get -o json restore-662cd99e-8e7d-4c7c-a130-5a16ad74b7f7
Waiting for kibishii pods to be ready
Pod etcd0 is in state Pending waiting for it to be Running
Pod etcd0 is in state Pending waiting for it to be Running
Pod etcd1 is in state Pending waiting for it to be Running
Pod etcd2 is in state Pending waiting for it to be Running
Pod etcd2 is in state Pending waiting for it to be Running
Pod etcd2 is in state Pending waiting for it to be Running
Pod etcd2 is in state Pending waiting for it to be Running
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
Velero uninstalled ⛵
• [SLOW TEST:372.527 seconds]
[Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups
/Users/jiangd/velero-ws/velero/test/e2e/backup_test.go:35
  when kibishii is the sample workload
  /Users/jiangd/velero-ws/velero/test/e2e/backup_test.go:74
    should be successfully backed up and restored to the default BackupStorageLocation
    /Users/jiangd/velero-ws/velero/test/e2e/backup_test.go:75
------------------------------
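The passing spec above automates what is essentially this manual cycle, built from the commands the harness printed (the backup and restore names here are arbitrary placeholders):

    # back up the workload namespace, snapshotting its volumes
    velero --namespace velero backup create my-kibishii-backup \
      --include-namespaces kibishii-workload --snapshot-volumes --wait
    # simulate the disaster
    kubectl delete namespace kibishii-workload
    # restore from the backup and wait for it to finish
    velero --namespace velero restore create my-kibishii-restore \
      --from-backup my-kibishii-backup --wait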
[Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups when kibishii is the sample workload
  should successfully back up and restore to an additional BackupStorageLocation with unique credentials
  /Users/jiangd/velero-ws/velero/test/e2e/backup_test.go:84
Running cmd "/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero install --namespace velero --crds-version v1 --image velero/velero:v1.7.0-rc.1 --use-volume-snapshots --provider aws --backup-location-config region=us-east-2 --bucket jt-velero-test-upgrade --secret-file /Users/jiangd/tmp/velero-test/aws-credential --snapshot-location-config region=us-east-2 --plugins velero/velero-plugin-for-aws:v1.3.0-rc.1 --dry-run --output json --crds-only"
Applying velero CRDs...
customresourcedefinition.apiextensions.k8s.io/backups.velero.io created
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io created
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io created
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io created
customresourcedefinition.apiextensions.k8s.io/resticrepositories.velero.io created
customresourcedefinition.apiextensions.k8s.io/restores.velero.io created
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io created
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io created
Waiting velero CRDs ready...
customresourcedefinition.apiextensions.k8s.io/backups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/resticrepositories.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/restores.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io condition met
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io condition met
Running cmd "/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero install --namespace velero --crds-version v1 --image velero/velero:v1.7.0-rc.1 --use-volume-snapshots --provider aws --backup-location-config region=us-east-2 --bucket jt-velero-test-upgrade --secret-file /Users/jiangd/tmp/velero-test/aws-credential --snapshot-location-config region=us-east-2 --plugins velero/velero-plugin-for-aws:v1.3.0-rc.1 --dry-run --output json"
image pull secret "image-pull-secret" set for velero serviceaccount
Running cmd "/usr/local/bin/kubectl apply -f -"
customresourcedefinition.apiextensions.k8s.io/backups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/backupstoragelocations.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/deletebackuprequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/downloadrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumebackups.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/podvolumerestores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/resticrepositories.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/restores.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/schedules.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/serverstatusrequests.velero.io unchanged
customresourcedefinition.apiextensions.k8s.io/volumesnapshotlocations.velero.io unchanged
namespace/velero created
clusterrolebinding.rbac.authorization.k8s.io/velero created
serviceaccount/velero created
secret/cloud-credentials created
backupstoragelocation.velero.io/default created
volumesnapshotlocation.velero.io/default created
deployment.apps/velero created
secret/image-pull-secret created
Waiting for Velero deployment to be ready.
Velero is installed and ready to be tested in the velero namespace! ⛵
An error occurred: Deployment.apps "velero" is invalid: spec.template.spec.initContainers[1].name: Duplicate value: "velero-velero-plugin-for-aws"
Backup storage location "bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70" configured successfully.
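The "Duplicate value" error above is worth flagging even though the run continues: it looks like the harness tries to add the AWS plugin again while configuring the additional BSL, and because ADDITIONAL_OBJECT_STORE_PROVIDER is also aws, the init container name collides with the one velero install already created. That reading is an inference from this log, not confirmed against the test code. The deployment's init container names can be listed directly (standard kubectl, assuming access to the test cluster):

    kubectl -n velero get deployment velero \
      -o jsonpath='{.spec.template.spec.initContainers[*].name}'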
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
backup cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero create backup backup-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70 --include-namespaces kibishii-workload --wait --snapshot-volumes --storage-location default
Backup request "backup-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
...
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70` and `velero backup logs backup-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70`.
get backup cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero backup get -o json backup-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70
Simulating a disaster by removing namespace kibishii-workload
restore cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero create restore restore-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70 --from-backup backup-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70 --wait
Restore request "restore-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
............
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe restore-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70` and `velero restore logs restore-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70`.
get restore cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero restore get -o json restore-default-49baf504-ba66-48d9-b2b5-ed8cd93c5d70
Waiting for kibishii pods to be ready
Pod etcd0 is in state Pending waiting for it to be Running
Pod etcd0 is in state Pending waiting for it to be Running
Pod etcd0 is in state Pending waiting for it to be Running
Pod etcd0 is in state Pending waiting for it to be Running
Pod etcd0 is in state Pending waiting for it to be Running
running kibishii verify
kibishiiVerifyCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/verify.sh 2 10 10 1024 1024 0 2
kibishii test completed successfully
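At this point the default BSL cycle has passed, and the harness repeats the same generate/backup/restore/verify sequence against the additional location by switching --storage-location. When following along manually, both locations can be checked for availability before the second cycle (sketch; recent velero releases ship this subcommand):

    velero --namespace velero backup-location get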
Waiting for kibishii jump-pad pod to be ready
Waiting for kibishii pods to be ready
kibishiiGenerateCmd cmd =/usr/local/bin/kubectl exec -n kibishii-workload jump-pad -- /usr/local/bin/generate.sh 2 10 10 1024 1024 0 2
backup cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero create backup backup-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70 --include-namespaces kibishii-workload --wait --snapshot-volumes --storage-location bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70
Backup request "backup-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
...
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70` and `velero backup logs backup-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70`.
get backup cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero backup get -o json backup-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70
Simulating a disaster by removing namespace kibishii-workload
restore cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero create restore restore-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70 --from-backup backup-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70 --wait
Restore request "restore-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
...
Restore completed with status: PartiallyFailed. You may check for more information using the commands `velero restore describe restore-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70` and `velero restore logs restore-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70`.
get restore cmd =/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero --namespace velero restore get -o json restore-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70
debug cmd=/Users/jiangd/velero-ws/velero/test/e2e/../../_output/bin/darwin/amd64/velero debug --namespace velero --output debug-bundle-1631946757127472000.tar.gz --restore restore-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70
Generating the debug tarball at debug-bundle-1631946757127472000.tar.gz
2021/09/18 14:32:39 Collecting velero resources in namespace: velero
2021/09/18 14:32:52 Collecting velero deployment logs in namespace: velero
2021/09/18 14:32:59 Collecting log for restore: restore-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70
2021/09/18 14:33:05 Generated debug information bundle: /Users/jiangd/velero-ws/velero/test/e2e/debug-bundle-1631946757127472000.tar.gz
Velero uninstalled ⛵
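Since the harness uninstalls Velero on teardown, the debug bundle generated above is the main artifact left for diagnosing the PartiallyFailed restore; per the collection messages it includes the restore log. Unpacking it is plain tar (path taken from the log; the destination directory is arbitrary):

    mkdir -p /tmp/velero-debug
    tar -xzf /Users/jiangd/velero-ws/velero/test/e2e/debug-bundle-1631946757127472000.tar.gz \
      -C /tmp/velero-debug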
• Failure [635.847 seconds]
[Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups
/Users/jiangd/velero-ws/velero/test/e2e/backup_test.go:35
  when kibishii is the sample workload
  /Users/jiangd/velero-ws/velero/test/e2e/backup_test.go:74
    should successfully back up and restore to an additional BackupStorageLocation with unique credentials [It]
    /Users/jiangd/velero-ws/velero/test/e2e/backup_test.go:84

    Failed to successfully backup and restore Kibishii namespace using BSL bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70
    Expected success, but got an error:
        <*errors.withStack | 0xc0004274b8>: {
            error: {
                cause: {
                    msg: "Unexpected restore phase got PartiallyFailed, expecting Completed",
                    stack: [0x229e7ea, 0x229f6f9, 0x2299aec, 0x22ae41a, 0x22584e3, 0x22580fc, 0x22576c7, 0x225b5cf, 0x225ac72, 0x22698f1, 0x2269407, 0x2268bf7, 0x226b306, 0x2277f38, 0x2277c76, 0x22a3b8b, 0x111d84f, 0x1073d81],
                },
                msg: "Restore restore-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70 failed from backup backup-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70",
            },
            stack: [0x2299df8, 0x22ae41a, 0x22584e3, 0x22580fc, 0x22576c7, 0x225b5cf, 0x225ac72, 0x22698f1, 0x2269407, 0x2268bf7, 0x226b306, 0x2277f38, 0x2277c76, 0x22a3b8b, 0x111d84f, 0x1073d81],
        }
        Restore restore-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70 failed from backup backup-bsl-49baf504-ba66-48d9-b2b5-ed8cd93c5d70: Unexpected restore phase got PartiallyFailed, expecting Completed

    /Users/jiangd/velero-ws/velero/test/e2e/backup_test.go:134
------------------------------
S

JUnit report was created: /Users/jiangd/velero-ws/velero/test/e2e/report.xml

Summarizing 1 Failure:

[Fail] [Snapshot] Velero tests on cluster using the plugin provider for object storage and snapshots for volume backups when kibishii is the sample workload [It] should successfully back up and restore to an additional BackupStorageLocation with unique credentials
/Users/jiangd/velero-ws/velero/test/e2e/backup_test.go:134

Ran 2 of 9 Specs in 1008.374 seconds
FAIL! -- 1 Passed | 1 Failed | 0 Pending | 7 Skipped
--- FAIL: TestE2e (1024.97s)
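To iterate on just the failing spec rather than re-running both, the GINKGO_FOCUS regex from the original invocation can be narrowed to that spec's description (sketch; the other environment variables from the top of this log are still required):

    GINKGO_FOCUS='additional BackupStorageLocation with unique credentials' \
      make test-e2e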
You're using deprecated Ginkgo functionality:
=============================================
Ginkgo 2.0 is under active development and will introduce (a small number of) breaking changes. To learn more, view the migration guide at https://github.com/onsi/ginkgo/blob/v2/docs/MIGRATING_TO_V2.md
To comment, chime in at https://github.com/onsi/ginkgo/issues/711

  You are using a custom reporter. Support for custom reporters will likely be removed in V2. Most users were using them to generate junit or teamcity reports and this functionality will be merged into the core reporter. In addition, Ginkgo 2.0 will support emitting a JSON-formatted report that users can then manipulate to generate custom reports.
  If this change will be impactful to you please leave a comment on https://github.com/onsi/ginkgo/issues/711
  Learn more at: https://github.com/onsi/ginkgo/blob/v2/docs/MIGRATING_TO_V2.md#removed-custom-reporters

To silence deprecations that can be silenced set the following environment variable:
  ACK_GINKGO_DEPRECATIONS=1.16.4

FAIL
Ginkgo ran 1 suite in 17m10.312143555s
Test Suite Failed
make[1]: *** [run] Error 1
make: *** [test-e2e] Error 2
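The two make errors at the end simply propagate the suite's nonzero exit status; the Ginkgo deprecation banner is unrelated to the failure. As the notice itself suggests, it can be silenced on future runs (assuming the variable reaches the test process, which the make -e invocation above should allow):

    ACK_GINKGO_DEPRECATIONS=1.16.4 make test-e2e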