Unable to select Deployment Pod Storage driver #11899

Closed
richard-cox opened this issue Sep 12, 2024 · 1 comment · Fixed by #12682
Labels
kind/bug, QA/dev-automation (Issues that engineers have written automation around so QA doesn't have to look at this)

Comments

@richard-cox
Member

Setup

  • Rancher version: 2.9.1

Describe the bug

  • When creating or editing a deployment, the pod's storage CSI driver cannot be specified

To Reproduce

  • Cluster Explorer --> Workloads --> Deployments --> Create --> Pod tab --> Storage --> Select CSI

Result
(screenshot omitted)

Expected Result

  • A list of acceptable values
  • Note, however, that the options supplied to the component are [ "driver.longhorn.io", "edit/workload/storage/csi/driver.longhorn.io", "edit/workload/storage/csi/index" ]; the first two make sense, but the last one does not (see the sketch below)
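The stray path-like entries suggest the option list is being built from component/route paths rather than driver names. Below is a minimal TypeScript sketch of the filtering the dropdown would presumably need, assuming the raw option list shown above; the function name and approach are illustrative only, not the dashboard's actual code.

```ts
// Illustrative only: the dashboard's real component and props differ.
// Keep entries that look like CSI driver names and drop the
// route-like strings (anything containing a path separator).
const rawOptions: string[] = [
  'driver.longhorn.io',
  'edit/workload/storage/csi/driver.longhorn.io',
  'edit/workload/storage/csi/index',
];

function csiDriverOptions(options: string[]): string[] {
  return options.filter((opt) => !opt.includes('/'));
}

console.log(csiDriverOptions(rawOptions)); // ['driver.longhorn.io']
```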

Additional context

@richard-cox richard-cox added this to the v2.11.0 milestone Sep 12, 2024
@github-actions github-actions bot added the QA/dev-automation Issues that engineers have written automation around so QA doesn't have look at this label Sep 12, 2024
@nwmac nwmac modified the milestones: v2.12.0, v2.11.0 Nov 1, 2024
@rak-phillip rak-phillip self-assigned this Nov 15, 2024
@yonasberhe23
Contributor

E2E test coverage is sufficient; moving to Done.
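
For reference, a minimal sketch of the kind of e2e check described above, assuming Cypress (used for the dashboard's e2e suite) with hypothetical routes, labels, and test ids; this is not the actual test that was added.

```ts
// Hypothetical Cypress spec: the route, labels, and test ids below are assumptions.
describe('Deployment pod storage: CSI driver', () => {
  it('lists selectable CSI drivers when a CSI volume is added', () => {
    cy.visit('/c/local/explorer/apps.deployment/create'); // assumed route
    cy.contains('Pod').click();
    cy.contains('Storage').click();
    cy.contains('CSI').click();

    // The driver dropdown should offer at least one driver name and
    // no route-like entries such as 'edit/workload/storage/csi/index'.
    cy.get('[data-testid="csi-driver-select"]').click(); // assumed test id
    cy.get('[data-testid="csi-driver-select"] .vs__dropdown-option') // assumed vue-select markup
      .should('have.length.greaterThan', 0)
      .each(($option) => {
        expect($option.text()).not.to.contain('edit/workload/storage/csi');
      });
  });
});
```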
