OpenShift + Pipeline on GCP is broken #1742
Comments
To summarize, so that I'm sure I understand the issue:
Is that correct? One solution would be to not use …
@imjasonh I think this is true even without the entrypoint magic as k8schain is also used in …
Created google/go-containerregistry#630 upstream 👼
Yes, https://github.com/kubernetes/kubernetes/blob/master/pkg/credentialprovider/gcp/metadata.go#L239 blocks (as it loops forever with backoff), and thus the rest of the code never gets executed (and that means, for the controller, that it is never ready to reconcile anything 😅).
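For readers who haven't followed the link, the pattern being described is roughly the one below. This is only a simplified, hypothetical sketch of the behaviour (not the actual kubernetes credentialprovider code); it shows why an init path that waits for the GCP metadata server with backoff never returns when the cluster blocks that URL.

```go
// Simplified sketch of the blocking behaviour described above. This is NOT
// the real k8s.io/kubernetes/pkg/credentialprovider/gcp code, only an
// illustration of why a blocked metadata endpoint makes startup hang forever.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// metadata.google.internal is the standard GCP metadata endpoint.
const metadataURL = "http://metadata.google.internal/computeMetadata/v1/"

// waitForMetadata keeps probing the metadata server with increasing backoff
// and only returns once it answers. If the cluster blocks that URL (as
// OpenShift does for pods without host networking), this loop never ends and
// whatever runs after it (here, the controller startup) never executes.
func waitForMetadata() {
	client := &http.Client{Timeout: 2 * time.Second}
	backoff := time.Second
	for {
		req, _ := http.NewRequest("GET", metadataURL, nil)
		req.Header.Set("Metadata-Flavor", "Google")
		resp, err := client.Do(req)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			return
		}
		if resp != nil {
			resp.Body.Close()
		}
		fmt.Printf("metadata server not reachable, retrying in %s\n", backoff)
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	waitForMetadata() // blocks forever when the metadata URL is unreachable
	fmt.Println("the controller would only start reconciling after this point")
}
```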
Ah okay, so just importing the magic import causes the controller to block forever when installed on OpenShift-on-GCP. Is that correct? These magic imports seem like more trouble than they're worth, to be honest. 👿 Did this work until recently? AFAIK we've had an indirect dependency on the magic imports for quite a while.
We only tried that recently on GCP so… I am guessing it never worked before. It is the same for Knative by the way. Yeah, I am really not a huge fan of magic imports and the use of …
I can confirm that with master...vdemeester:k8schain-quick-fix and …
@imjasonh This is not technically OpenShift-specific either. We've had reports in Knative of other managed Kubernetes services on GCP hitting this same issue. Basically, anyone who can hit that metadata URL can gain credentials that a random user on a K8s cluster shouldn't necessarily be able to get. That's why OpenShift and other managed K8s distros block that metadata URL from pods in the cluster unless the pods are running with host networking.
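To make that security point concrete: on an unrestricted GCP node, any pod that can reach the metadata server can request an access token for the node's service account. The endpoint and the Metadata-Flavor header below are the documented GCP ones; the rest is an illustrative sketch, not code from this repository.

```go
// Illustrative sketch: fetching a service-account access token from the GCP
// metadata server. Any pod that can reach this URL effectively gets the
// node's credentials, which is why OpenShift blocks it by default.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	const tokenURL = "http://metadata.google.internal/computeMetadata/v1/" +
		"instance/service-accounts/default/token"

	req, err := http.NewRequest("GET", tokenURL, nil)
	if err != nil {
		panic(err)
	}
	// GCP requires this header so the metadata server only answers
	// deliberate requests.
	req.Header.Set("Metadata-Flavor", "Google")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("metadata server not reachable (blocked, or not on GCP):", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// On an unrestricted GCP node this prints JSON containing an access_token
	// for the node's default service account.
	fmt.Println(string(body))
}
```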
Upstream issue: kubernetes/kubernetes#86245
This can be considered complete as #1882 has been merged; there is just a build tag to set and we are good to go.
As we are tracking that downstream and the required bump of …
/close
@vdemeester: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Expected Behavior
Deploying Tekton Pipeline on an OpenShift cluster running in GCP should work.
Actual Behavior
Deploying Tekton Pipeline on an OpenShift cluster running in GCP does not work: creating a TaskRun or PipelineRun produces no resources, because the pipeline controller never becomes ready.
With an OCP 4.2 cluster installed in GCP and the RH OpenShift Pipelines Operator 0.8.0, I see that creating a runtime object (TaskRun or PipelineRun) does not create any resources such as pods. When checking the pipeline controller log, it doesn't show anything. The controller is actually looping forever.
Quoting @bbrowning
The main issue is with the "k8s.io/kubernetes/pkg/credentialprovider/gcp" import, and what is magically happening there, especially here. This metadata URL is being disallowed by OpenShift and thus this loops forever (with backoff, but still).

Steps to Reproduce the Problem
Additional Info
One easy way to fix it would be to put that magic import behind build tags (upstream in go-containerregistry), as sketched below.
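As a rough sketch of that idea: the blank import moves into its own file that only compiles when a tag is set. The file name and the tag name below are purely illustrative, and whether the real upstream change (go-containerregistry#630 / #1882) uses an opt-in or an opt-out tag is a detail of that change; this shows the opt-in variant.

```go
// credentialprovider_gcp.go (hypothetical file name)
//
// Sketch of putting the magic import behind a build tag. "include_gcp_auth"
// is an illustrative tag name, not necessarily the one chosen upstream.

//go:build include_gcp_auth
// +build include_gcp_auth

package main

import (
	// Blank import whose init() registers the GCP credential provider and,
	// in doing so, probes the metadata URL that OpenShift disallows.
	_ "k8s.io/kubernetes/pkg/credentialprovider/gcp"
)
```

A default build would then skip the import entirely, and anyone who wants the GCP credential provider back would opt in with something like go build -tags include_gcp_auth (again, an illustrative tag name).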
/assign
/kind bug