
During volume create, SDK should not return existing volume not in up state #1180

Closed
harsh-px opened this issue Jul 25, 2019 · 2 comments · Fixed by #1199

@harsh-px
Contributor

BUG REPORT:

The SDK returned an existing volume that was still in down state.

What happened:

  • Two volume create requests came in at the same time for the same volume name.
  • While the first create call was still creating the volume, the second create returned the volume as it existed in the inspect response. The volume was still in down state, which caused the k8s PVC to bind to a down volume. The test then went on to create a snapshot that was also in down state, since its parent volume was in down state.

https://github.com/libopenstorage/openstorage/blob/master/api/server/sdk/volume_ops.go#L45

What you expected to happen:

Return volume only if it is in up state.
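
For illustration only, here is a minimal Go sketch of the kind of guard being asked for in the SDK Create path. The helper name and the gRPC error code are this sketch's own choices, not the repository's code; the api.Volume status field and its UP constant are assumed from openstorage's api package rather than copied from volume_ops.go.

```go
package sketch // illustrative only, not part of volume_ops.go

import (
	"github.com/libopenstorage/openstorage/api"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// canReturnExisting decides whether a volume that already exists for the
// requested name may be handed back to the Create caller.
func canReturnExisting(vol *api.Volume) error {
	// Assumption: Volume.Status carries the driver-reported state and
	// VOLUME_STATUS_UP means the volume is ready for use.
	if vol.GetStatus() != api.VolumeStatus_VOLUME_STATUS_UP {
		return status.Errorf(codes.Unavailable,
			"volume %s already exists but is not up yet", vol.GetId())
	}
	return nil
}
```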

How to reproduce it (as minimally and precisely as possible):

  1. Trigger the first create volume call; it should put the volume in down state before the second call starts. Don't return from this call yet.
  2. Trigger a second create call for the same name. With the current code, it returns the down-state volume (see the sketch after this list).
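
A hedged reproduction sketch of the race above, assuming the SDK's gRPC surface (api.NewOpenStorageVolumeClient, SdkVolumeCreateRequest) and an SDK endpoint listening locally on port 9100; both of those are assumptions, and since the timing is best-effort the race is not guaranteed to trigger on every run.

```go
package main

import (
	"context"
	"fmt"
	"sync"

	"github.com/libopenstorage/openstorage/api"
	"google.golang.org/grpc"
)

func main() {
	// Assumption: an openstorage SDK server is listening on localhost:9100.
	conn, err := grpc.Dial("localhost:9100", grpc.WithInsecure())
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	volumes := api.NewOpenStorageVolumeClient(conn)

	req := &api.SdkVolumeCreateRequest{
		Name: "race-test-vol",
		Spec: &api.VolumeSpec{Size: 1 << 30, HaLevel: 1},
	}

	// Fire two identical create calls at (nearly) the same time.
	var wg sync.WaitGroup
	for i := 1; i <= 2; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			resp, err := volumes.Create(context.Background(), req)
			// Before the fix, the later call could get back the id of a
			// volume the first call was still preparing (down state).
			fmt.Printf("call %d: resp=%v err=%v\n", n, resp, err)
		}(i)
	}
	wg.Wait()
}
```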

Anything else we need to know?:

Environment:

  • Container Orchestrator and version: k8s on OpenShift (can also be reproduced outside of k8s using the steps above)
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools:
  • Others:
@lpabon lpabon self-assigned this Jul 25, 2019
@lpabon
Member

lpabon commented Jul 31, 2019

Are these some of the scenarios?

Scenario 1:

  1. Delete a volume with name A. The SDK returns ok, but the driver is still deleting it.
  2. Create a volume with the name A. The SDK returns an ok volume id since the old volume is still around and being deleted.

Scenario 2:

  1. Create a volume with name A.
  2. The SDK returns the volume id for name A, but the volume is not ready yet since it is still being prepared by the driver.

@harsh-px
Contributor Author

harsh-px commented Aug 1, 2019

Scenario 1:

Yes, we have seen this scenario too in a different torpedo test.

Scenario 2:

Slightly changing this scenario to match the original issue more precisely.

  1. time=0sec: Create a volume with name A.
  2. time=1sec: Another create request comes in for a volume with the same name A.
  3. time=2sec: The SDK returns the volume id for name A to the 2nd request, but the volume is still not ready since it is being prepared by the driver as part of the first request.
  4. time=3sec: The caller of the second request sees that its create request succeeded and starts using the volume, which still wasn't prepared.
  5. time=4sec: The first volume create request is now complete.

lpabon added a commit to lpabon/openstorage that referenced this issue Aug 19, 2019
With this fix, any call creating a volume and noticing it already
exists now checks to make sure that the volume is UP and ready
before returning.

Closes libopenstorage#1180

Signed-off-by: Luis Pabón <[email protected]>
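
A hypothetical sketch of the kind of readiness check the commit message describes: polling until the pre-existing volume reports up, or giving up when the request context expires. The helper and the small inspector interface below are this sketch's own; they mirror, but do not reproduce, the driver's Inspect call or the actual change in #1199.

```go
package sketch

import (
	"context"
	"time"

	"github.com/libopenstorage/openstorage/api"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// inspector is the minimal slice of driver behaviour this sketch needs;
// openstorage's volume driver exposes a comparable Inspect call.
type inspector interface {
	Inspect(ids []string) ([]*api.Volume, error)
}

// waitForVolumeUp polls until the existing volume reports up, or fails
// when ctx is cancelled.
func waitForVolumeUp(ctx context.Context, d inspector, id string) (*api.Volume, error) {
	ticker := time.NewTicker(250 * time.Millisecond)
	defer ticker.Stop()
	for {
		vols, err := d.Inspect([]string{id})
		if err == nil && len(vols) == 1 &&
			vols[0].GetStatus() == api.VolumeStatus_VOLUME_STATUS_UP {
			return vols[0], nil
		}
		select {
		case <-ctx.Done():
			return nil, status.Errorf(codes.Unavailable,
				"volume %s exists but never became ready: %v", id, ctx.Err())
		case <-ticker.C:
			// try again on the next tick
		}
	}
}
```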
lpabon added the same commit (rebased) to lpabon/openstorage referencing this issue on Aug 27, Aug 30, and Sep 5, 2019; the later rebases recorded merge conflicts in api/server/sdk/sdk_test.go, api/server/sdk/volume_ops.go, api/server/testutils_test.go, and csi/csi_test.go.