Workspace "from" clauses ensure task ordering #1936
Conversation
How would it behave if a pipeline has three or more tasks?
Would it enforce IMO,
Hm, the way to ensure that currently would be to have task2 use
Yeah that's a good point, I agree. It would also be confusing because variables are normally strings, but here the user isn't able to interpolate anything but other workspaces (i.e. they can't use a param to inject a workspace name). I think I'm going to change this to use a From field.
I've rewritten this to use "from" syntax instead and I think it's largely ready for review now. Removed WIP label.
thanks @sbwsg, validated one more use case with a workspace taken from a second task.
Follows the sequential order as specified:
/lgtm
ah, thank you for checking it out @pritidesai! I think I might add an example that does this as well to make sure multiple "from"s chain correctly.
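For illustration, a chained version might look roughly like the following fragment of a Pipeline's tasks list. The field names (in particular the from key) are a guess based on the PR description rather than the final syntax, and the task and workspace names are made up:

tasks:
  - name: task1            # writes into the shared volume
    taskRef:
      name: write-data
    workspaces:
      - name: output
        workspace: shared-data
  - name: task2            # should be ordered after task1
    taskRef:
      name: transform-data
    workspaces:
      - name: src
        workspace: shared-data
        from: task1        # guessed field: use the workspace task1 wrote to
  - name: task3            # should be ordered after task2
    taskRef:
      name: publish-data
    workspaces:
      - name: src
        workspace: shared-data
        from: task2

Because task2 takes its workspace from task1, and task3 takes its from task2, the three tasks should be forced to run strictly in sequence.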
/lgtm Would also like to hear @skaegi's comments on this, which we couldn't get to during the demo in the working group.
Ah no, I missed the reconciler test. Adding today!
Support for this is dictated by the volume type, I think. Some types don't support multiple simultaneous writers while some do. GKE's persistent volume implementation doesn't support multiple simultaneous writers. I believe the platform will complain through Pod errors if one tries to do that. We don't perform any copies of the data as we shuffle the PVCs around.
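For background, this is standard Kubernetes behaviour rather than anything this PR changes: whether several pods can write to a volume at once is governed by the PVC's access modes (GCE persistent disks typically only offer ReadWriteOnce, whereas something like NFS can offer ReadWriteMany). A minimal PVC sketch with an illustrative name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-workspace-pvc     # illustrative name
spec:
  accessModes:
    - ReadWriteOnce              # only one node may mount the volume read-write at a time
  resources:
    requests:
      storage: 1Gi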
aaand reconciler tests have been added
generateName: fib-
spec:
  pipelineRef:
    name: horrible-fibonacci
@sbwsg I think you need to change this to worlds-slowest-but-greatest-fibonacci
🤔
this pipelinerun is failing with:
kubectl get pipelinerun
NAME SUCCEEDED REASON STARTTIME COMPLETIONTIME
fib-pipelinerun False CouldntGetPipeline 26s 26s
@sbwsg I might not have pulled all your code to test with (because of the conflict), please ignore it if you are not hitting this but the pods are stuck in pending:
kubectl get pods
NAME READY STATUS RESTARTS AGE
fib-pipelinerun-init-1-5rglh-pod-zvznr 0/1 Pending 0 19m
fib-pipelinerun-init-2-nnq6v-pod-bq8xs 0/1 Pending 0 19m
with a message:
"message": "pod status \"PodScheduled\":\"False\"; message: \"pod has unbound immediate PersistentVolumeClaims\"",
amazing, great catch. I updated the pipelineRef.
There are some remaining issues related to using regional PVCs in the example yaml. I'm getting volume node affinity errors when I run it locally and it appears that similar issues are occurring here in the CI cluster as well. Can't quite put my finger on what's going wrong but still looking into it.
/hold This feature is still a bit contentious and so I'm still in the process of discussing whether it's desirable / needed / a future-us problem / etc...
Given that we're no longer going with "from" syntax for task results, and that the requirement for this feature remains under debate, I'm going to close the pull request. We can reopen at some point if we decide this is more than a nice-to-have.
OK, we're going to revisit this PR, but using a variable interpolation syntax instead.
Closing for now, intend to revisit at a later date.
…ion 👷♀️ This is a super minor change to rename WorkspacePipelineDeclaration to PipelineWorkspaceDeclaration. WorkspacePipelineDeclaration sounds like it's a pipeline declaration inside of a workspace, but actually its meant to be a workspace declaration inside of a pipeline! I'm trying to pickup some of the work started in tektoncd#1936 and this seemed like a reasonable improvement to carve off and merge on its own :D Co-authored-by: Scott <[email protected]>
…ion 👷♀️ This is a super minor change to rename WorkspacePipelineDeclaration to PipelineWorkspaceDeclaration. WorkspacePipelineDeclaration sounds like it's a pipeline declaration inside of a workspace, but actually its meant to be a workspace declaration inside of a pipeline! I'm trying to pickup some of the work started in tektoncd#1936 and this seemed like a reasonable improvement to carve off and merge on its own :D Co-authored-by: Scott <[email protected]>
…ion 👷♀️ This is a super minor change to rename WorkspacePipelineDeclaration to PipelineWorkspaceDeclaration. WorkspacePipelineDeclaration sounds like it's a pipeline declaration inside of a workspace, but actually its meant to be a workspace declaration inside of a pipeline! I'm trying to pickup some of the work started in tektoncd#1936 and this seemed like a reasonable improvement to carve off and merge on its own :D Co-authored-by: Scott <[email protected]>
…ion 👷♀️ This is a super minor change to rename WorkspacePipelineDeclaration to PipelineWorkspaceDeclaration. WorkspacePipelineDeclaration sounds like it's a pipeline declaration inside of a workspace, but actually its meant to be a workspace declaration inside of a pipeline! I'm trying to pickup some of the work started in #1936 and this seemed like a reasonable improvement to carve off and merge on its own :D Co-authored-by: Scott <[email protected]>
Changes
Workspaces allow users to wire PVCs through the tasks of their pipelines. Unfortunately, multiple tasks all trying to mount a PVC at once can result in unpleasant conflicts.
To combat the situation where multiple Task pods are fighting over a PVC, "from" clauses have been introduced for workspaces. One task can explicitly declare that it will use a workspace from a previous task. When Tekton sees a from clause linking one task's workspace to another, it will ensure that the sequence of those tasks is enforced.
Here's a relevant snippet of YAML to show how this PR currently operates:
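A rough reconstruction of that snippet (the apiVersion, names, and in particular the from field are illustrative guesses and may not match the PR exactly):

apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: example-pipeline
spec:
  workspaces:
    - name: shared-data           # backed by a PVC supplied at PipelineRun time
  tasks:
    - name: task1
      taskRef:
        name: write-data
      workspaces:
        - name: output
          workspace: shared-data
    - name: task2
      taskRef:
        name: read-data
      workspaces:
        - name: src
          workspace: shared-data
          from: task1             # guessed field: populate src from task1's workspace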
Here, task2 will run after task1 because task2 declares that its src workspace should be populated using whatever volume was used as the output workspace from task1.
Submitter Checklist
These are the criteria that every PR should meet; please check them off as you review them:
See the contribution guide for more details.
Reviewer Notes
If API changes are included, additive changes must be approved by at least two OWNERS and backwards incompatible changes must be approved by more than 50% of the OWNERS, and they must first be added in a backwards compatible way.
Release Notes