Hi team,
I have an issue with a VolumeSnapshot; here are the logs of the pod:
```
I0730 12:00:22.711050 1 log.go:172] FLAG: --alsologtostderr="false"
I0730 12:00:22.711100 1 log.go:172] FLAG: --backupsession="volumesnapshot-backup-1596110416"
I0730 12:00:22.711104 1 log.go:172] FLAG: --bypass-validating-webhook-xray="false"
I0730 12:00:22.711107 1 log.go:172] FLAG: --enable-analytics="true"
I0730 12:00:22.711111 1 log.go:172] FLAG: --help="false"
I0730 12:00:22.711114 1 log.go:172] FLAG: --kubeconfig=""
I0730 12:00:22.711117 1 log.go:172] FLAG: --log-flush-frequency="5s"
I0730 12:00:22.711121 1 log.go:172] FLAG: --log_backtrace_at=":0"
I0730 12:00:22.711125 1 log.go:172] FLAG: --log_dir=""
I0730 12:00:22.711128 1 log.go:172] FLAG: --logtostderr="true"
I0730 12:00:22.711131 1 log.go:172] FLAG: --master=""
I0730 12:00:22.711142 1 log.go:172] FLAG: --metrics-enabled="true"
I0730 12:00:22.711148 1 log.go:172] FLAG: --pushgateway-url="http://stash-operator.kube-system.svc:56789"
I0730 12:00:22.711153 1 log.go:172] FLAG: --service-name="stash-operator"
I0730 12:00:22.711159 1 log.go:172] FLAG: --stderrthreshold="0"
I0730 12:00:22.711163 1 log.go:172] FLAG: --target-kind="StatefulSet"
I0730 12:00:22.711168 1 log.go:172] FLAG: --target-name="lllllaaaaa"
I0730 12:00:22.711173 1 log.go:172] FLAG: --use-kubeapiserver-fqdn-for-aks="true"
I0730 12:00:22.711181 1 log.go:172] FLAG: --v="3"
I0730 12:00:22.711197 1 log.go:172] FLAG: --vmodule=""
W0730 12:00:22.769473 1 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x16679f9]

goroutine 1 [running]:
stash.appscode.dev/stash/pkg/util.WaitUntilVolumeSnapshotReady.func1(0xc000cd2d40, 0x1391d4d, 0x22fc4c0)
	/src/pkg/util/kubernetes.go:450 +0xc9
k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc000a222c0, 0xc000cd2da8, 0xc000a222c0, 0x7f866e433008)
	/src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:338 +0x2b
k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x2faf080, 0x68c61714000, 0xc000cd2da8, 0xc000cd2dc8, 0x40cb48)
	/src/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:334 +0x4d
stash.appscode.dev/stash/pkg/util.WaitUntilVolumeSnapshotReady(0x2a3eca0, 0xc0008bed20, 0xc000bb7aa0, 0x2c, 0x0, 0x0, 0xc000a37d00, 0x18, 0xc000700cf0, 0x86, ...)
	/src/pkg/util/kubernetes.go:448 +0x7e
stash.appscode.dev/stash/pkg/cmds.(*VSoption).createVolumeSnapshot(0xc00019a0e0, 0xc000b72720, 0x20, 0x0, 0x0, 0xc000b72740, 0x18, 0xc000ab0280, 0x74, 0xc000b7a0f0, ...)
	/src/pkg/cmds/create_volumesnapshot.go:183 +0x258
stash.appscode.dev/stash/pkg/cmds.NewCmdCreateVolumeSnapshot.func1(0xc00061af00, 0xc000794fd0, 0x0, 0xb, 0x0, 0x0)
	/src/pkg/cmds/create_volumesnapshot.go:105 +0x4ca
github.com/spf13/cobra.(*Command).execute(0xc00061af00, 0xc000794f20, 0xb, 0xb, 0xc00061af00, 0xc000794f20)
	/src/vendor/github.com/spf13/cobra/command.go:826 +0x460
github.com/spf13/cobra.(*Command).ExecuteC(0xc0006e0a00, 0x8, 0x0, 0x0)
	/src/vendor/github.com/spf13/cobra/command.go:914 +0x2fb
github.com/spf13/cobra.(*Command).Execute(...)
	/src/vendor/github.com/spf13/cobra/command.go:864
main.main()
	/src/main.go:40 +0x8d
```
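The panic originates in `WaitUntilVolumeSnapshotReady` (`pkg/util/kubernetes.go:450`), inside a `wait.PollImmediate` closure. A plausible reading of the trace is that the freshly created VolumeSnapshot has not been reconciled yet, so its status is still nil when the poller dereferences it. Below is a minimal, self-contained Go sketch of that failure mode and the nil guard that avoids it; the types and the `getSnapshot` helper are illustrative assumptions, not the actual Stash code.

```go
// Minimal sketch (assumed names, not the real Stash implementation) of how a
// readiness poller can hit "invalid memory address or nil pointer dereference":
// right after creation, a VolumeSnapshot's Status (and Status.ReadyToUse) can
// still be nil, so dereferencing them unconditionally panics like the trace above.
package main

import (
	"fmt"
	"time"
)

// Trimmed-down stand-ins for the snapshot API types.
type VolumeSnapshotStatus struct {
	ReadyToUse *bool
}

type VolumeSnapshot struct {
	Status *VolumeSnapshotStatus
}

// getSnapshot is a hypothetical lookup; in real code this would be a
// client call against the snapshot.storage.k8s.io API.
func getSnapshot(name string) (*VolumeSnapshot, error) {
	// Status is still nil right after creation.
	return &VolumeSnapshot{}, nil
}

// waitUntilSnapshotReady polls until the snapshot reports ReadyToUse or the
// timeout expires.
func waitUntilSnapshotReady(name string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		snap, err := getSnapshot(name)
		if err == nil {
			// Guarding both pointers avoids the SIGSEGV from the log;
			// reading *snap.Status.ReadyToUse directly would panic here.
			if snap.Status != nil && snap.Status.ReadyToUse != nil && *snap.Status.ReadyToUse {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("volume snapshot %s not ready within %s", name, timeout)
}

func main() {
	if err := waitUntilSnapshotReady("volumesnapshot-backup-1596110416", 2*time.Second, 6*time.Second); err != nil {
		fmt.Println(err)
	}
}
```

The point of the sketch is only that a nil status is a legitimate transient state for a just-created snapshot, so the polling closure has to tolerate it instead of treating it as ready or dereferencing it.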
Here is the BackupConfiguration:
```yaml
apiVersion: stash.appscode.com/v1beta1
kind: BackupConfiguration
metadata:
  creationTimestamp: "2020-07-29T16:25:50Z"
  finalizers:
  - stash.appscode.com
  generation: 1
  name: volumesnapshot-backup
  namespace: etbla
  resourceVersion: "127490240"
  selfLink: /apis/stash.appscode.com/v1beta1/namespaces/dev-eu-eth1parityropsten/backupconfigurations/volumesnapshot-backup
  uid: dea23704-e600-40a0-878e-47bf768e7b95
spec:
  driver: VolumeSnapshotter
  retentionPolicy:
    keepLast: 2
    name: keep-last-2
    prune: true
  schedule: '*/10 * * * *'
  target:
    ref:
      apiVersion: apps/v1
      kind: StatefulSet
      name: etbla
    replicas: 1
    snapshotClassName: csi-rbdplugin-snapclass
status:
  observedGeneration: 1
```
I am using the Rook snapshot class from https://github.com/rook/rook/blob/v1.3.0/cluster/examples/kubernetes/ceph/csi/rbd/snapshotclass.yaml
Any idea?
This issue has been fixed in master in #1073
Thanks hossain. So this is not available in the current v0.9.0-rc.6, right? Is it possible to manually test the patch without waiting?
Yes. You can try the internal build. #1072 (comment)