Unable to deploy driver - Failed getting project and zone #490
Which version of the driver are you using? Also, the error message is cut off; could you paste the entire "Failed to get cloud provider" error?
I see you are using v0.7.0.
Correct, I'm installing the alpha version for snapshot feature support. Full output: I0417 16:33:29.986931 1 main.go:67] Driver vendor version v0.7.0-gke.0
Do you have
Nope, I do not. Do I need to add that in the spec somewhere?
Yes, see
I tried that and I'm still facing the same issue.
Not sure what the GCP metadata server is... is it the link-local address used to get VM metadata? The DaemonSet pods must use it, but OpenShift does not allow random pods to get to VM metadata; we used to put some sensitive material there (I don't remember exactly what, some certificates?).
There is nothing else OpenShift-specific...
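For context, one common way to let node DaemonSet pods reach a link-local metadata address is host networking. A hedged sketch only, assuming that is the relevant restriction here; the names and image below are illustrative, not this project's actual manifest:

```yaml
# Hypothetical DaemonSet fragment (not from this repo): hostNetwork lets the
# pod share the node's network namespace, so the link-local metadata address
# is reachable even when pod-network egress to it is blocked.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-gce-pd-node        # illustrative name
spec:
  selector:
    matchLabels:
      app: csi-gce-pd-node
  template:
    metadata:
      labels:
        app: csi-gce-pd-node
    spec:
      hostNetwork: true        # key line: use the node's network namespace
      containers:
        - name: gce-pd-driver
          image: gke.gcr.io/gcp-compute-persistent-disk-csi-driver  # illustrative
```

Note that on OpenShift, host networking may additionally require an SCC that permits it.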
Yes, I mean the link-local address used to get VM metadata: 169.254.169.254:80: connect: connection refused
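For readers unfamiliar with it: the GCE metadata server is only reachable at that link-local address, and requests must carry the Metadata-Flavor header. A minimal sketch of how such requests are formed (helper names here are illustrative, not the driver's actual code):

```python
# Sketch of how GCE metadata queries are built; helper names are illustrative.
METADATA_HOST = "169.254.169.254"                 # link-local, VM-only address
METADATA_HEADERS = {"Metadata-Flavor": "Google"}  # required by the server

def metadata_url(path: str) -> str:
    """Build the v1 metadata URL for a resource path like 'project/project-id'."""
    return f"http://{METADATA_HOST}/computeMetadata/v1/{path}"

# The driver needs both of these to identify the cluster's project and zone:
PROJECT_URL = metadata_url("project/project-id")
ZONE_URL = metadata_url("instance/zone")
```

If these URLs refuse connections from inside a pod (as in the error above), the pod's network path to the node's link-local range is being blocked.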
@msau42, have we been able to confirm that non-OCP environments are not hitting this issue?
Yes, we have CI running successfully in a kubetest GCP environment. @gnufied @jsafrane, are you able to run the PD driver in your OCP environment?
Yes, I am able to run e2e tests on GCP with the manifests from https://github.com/kubernetes/kubernetes/tree/master/test/e2e/testing-manifests/storage-csi/gce-pd.
I'm reading another issue from an AWS project where similar problems are being faced. I am also hitting similar metadata issues when using EC2 with OCP and the EBS CSI driver.
@msau42
Are you trying to run the controller on a node that doesn't have access to the metadata service?
There is no common code path between the two drivers, but the ideas are similar: both require access to the metadata service to get the project/zone information of the cluster they are running in. Work was underway in both drivers to remove this requirement and allow the controllers to run outside the Kubernetes cluster, but that requires additional arguments to be passed to the driver and is not the normal case.
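The pattern described above can be sketched as follows. This is a hedged illustration of the idea, not either driver's actual code; the function and flag names are hypothetical:

```python
# Hypothetical sketch: prefer explicit arguments, fall back to the metadata
# service. Passing flags removes the dependency on 169.254.169.254, which is
# what lets a controller run outside the cluster.
def resolve_project_zone(flags: dict, query_metadata) -> tuple:
    """Return (project, zone) from flags when given, else from metadata."""
    if flags.get("project") and flags.get("zone"):
        return flags["project"], flags["zone"]
    # Normal in-cluster case: requires the metadata server to be reachable.
    return (query_metadata("project/project-id"),
            query_metadata("instance/zone"))

# Example: a controller running outside the cluster passes flags explicitly.
print(resolve_project_zone({"project": "my-proj", "zone": "us-central1-a"},
                           query_metadata=None))
# → ('my-proj', 'us-central1-a')
```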
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I've double-checked all credentials, but I'm not sure why I keep hitting this issue with every version I deploy, stable or alpha.
Any idea what could be going wrong here?