Pipeline level sidecar #2973
Comments
As of today, this can be achieved by using a
I feel this "should" be handled by an integration like
This is indeed a use case that we have / will have with the hub (cc @sthaha).
I am not sure I got this one. The idea would be to make sure a docker registry runs when the pipeline runs? (so same as the
Another crazy idea: make pipeline-level heartbeats a first-class thing - maybe through cloudevents.
re: docker / registry sidecar use case... One thing we see frequently is that, in response to a git commit, a single pipeline is run that splits into separate Tasks used to both build sub-component images and sanity/unit test them concurrently. A later join Task decides if everything is in order and, if so, finally tags and pushes the images to a final pre-production image registry. We see this done with a Task-level sidecar docker daemon and a remote image registry. It might be good to have one dedicated sidecar/Task run a docker daemon the other Tasks share, and use a pipeline-scoped image registry until ready to do the final push.
I think a pipeline-level sidecar object makes sense for those use cases. Ideally we would be able to specify a Service attached to it so other tasks/pods can access it easily.
If we go ahead with this, some questions to think through:
These are useful questions. I think the level of customization required might make it easier to just recommend that people start a Service at the beginning of a Pipeline and tear it down in a
Yeah I think the only downside to that pattern is that the Tasks at the start and cleanup then have to run with enough RBAC to create the required k8s objects...
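The start-Service / tear-it-down pattern being discussed might look roughly like this (a hypothetical sketch only: the Task names, manifest file, and `kubectl`-style Tasks are all assumptions, not real catalog entries, and as noted the ServiceAccount running them would need RBAC to manage Pods and Services):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-with-pseudo-sidecar
spec:
  tasks:
    - name: start-sidecar          # hypothetical Task that applies a manifest
      taskRef:
        name: kubectl-apply        # assumed name, not an actual catalog Task
      params:
        - name: manifest
          value: sidecar-pod-and-service.yaml
    - name: main-work
      runAfter: ["start-sidecar"]  # only start once the "sidecar" exists
      taskRef:
        name: do-the-actual-work   # placeholder
  finally:                         # tear-down runs even if main-work fails
    - name: stop-sidecar
      taskRef:
        name: kubectl-delete       # assumed, mirrors kubectl-apply
      params:
        - name: manifest
          value: sidecar-pod-and-service.yaml
```

Using `finally` for the tear-down at least guarantees cleanup on failure, but it does nothing about the RBAC concern raised above.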
I wonder if the feature that would work would be less about being a "sidecar" and more about tying the lifecycle of a resource to a Pipeline - e.g. a kubernetes resource you "create" at the beginning of a pipeline and "delete" at the end?
That makes sense - stuff using "generateName" could be easily
For the boskos leasing, a Pipeline-level sidecar is a bit too wide... could we define a sidecar for a subset of Tasks? Effectively, once all of N listed Tasks are complete, Task X should be killed. A simpler primitive is to allow an anti-dependency. Today, if Task 2 requires input from Task 1, it doesn't start executing until Task 1 is complete. With an anti-dependency, if Task 2 has an anti-dependency on Task 1, it would start normally but be killed when Task 1 is complete. For the Boskos heartbeat, there'd be 3 Tasks: Once
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
/remove-lifecycle stale
I feel like we haven't had much interest in this one, so I'd be happy to close it and re-open it some day in the future if needed @vdemeester
Issues go stale after 90d of inactivity. /lifecycle stale Send feedback to tektoncd/plumbing.
Stale issues rot after 30d of inactivity. /lifecycle rotten Send feedback to tektoncd/plumbing.
Rotten issues close after 30d of inactivity. /close Send feedback to tektoncd/plumbing.
@tekton-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Feature request
In addition to letting Tasks specify sidecars (see https://github.com/tektoncd/pipeline/blob/master/docs/tasks.md#specifying-sidecars), which are containers that run alongside a Task's steps and whose lifecycle is managed by the Tekton controller, we would allow Pipelines to specify sidecars as well.
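For contrast, the existing Task-level mechanism looks roughly like this (a minimal sketch; the Task name, images, and step contents are placeholders, not a recommendation):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build-with-registry
spec:
  sidecars:
    - name: registry            # started before the steps, stopped when they finish
      image: registry:2
  steps:
    - name: build-and-push      # placeholder step; shares the pod with the sidecar,
      image: example.com/builder:latest   # so the registry is reachable on localhost
      script: |
        push-image localhost:5000/myimage   # hypothetical command
```

The proposal here is essentially the same lifecycle guarantee, but scoped to a whole Pipeline instead of a single Task's pod.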
We could implement these as pods, which the Tekton controller starts when execution of the Pipeline starts and stops when the Pipeline completes (probably after the finally Tasks? not sure). It probably doesn't make sense to specify them as Tasks b/c Tasks are designed to run to completion and this would run as long as a Pipeline runs.
If the pod stopped with an error, we'd need to decide if we want to restart it or fail the Pipeline (maybe make that configurable?).
Use case
In tektoncd/catalog#408 I'm creating Tasks to lease and return Boskos resources. A regular heartbeat needs to be sent to Boskos while the resource is being used; if Boskos doesn't get a heartbeat it will assume the resource is no longer in use, clean it up and return it to its pool.
I've implemented this by having the Task that leases the resource create a pod, and the Task that returns the resource stop that pod.
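That heartbeat pod might look something like the following (a hypothetical sketch, not the actual tektoncd/catalog#408 implementation; the image, loop, and Boskos URL/parameters are all assumptions):

```yaml
# Pod created by the "lease" Task and deleted by the "return" Task.
apiVersion: v1
kind: Pod
metadata:
  name: boskos-heartbeat
spec:
  restartPolicy: OnFailure      # restart the loop if it crashes mid-lease
  containers:
    - name: heartbeat
      image: alpine:3           # placeholder image
      command: ["sh", "-c"]
      args:
        - |
          # Assumed heartbeat loop: ping Boskos periodically so it does not
          # reclaim the leased resource. Endpoint and params are illustrative.
          while true; do
            wget -q -O- "http://boskos/update?name=my-resource&state=leased&owner=my-pipeline"
            sleep 30
          done
```

A Pipeline-level sidecar would let the controller own this pod's lifecycle instead of two Tasks having to create and delete it (with the RBAC that implies).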
Other use cases
A few more use cases were mentioned in the API working group: