- Overview
- Configuring a Pipeline
A Pipeline is a collection of Tasks that you define and arrange in a specific order of execution as part of your continuous integration flow. Each Task in a Pipeline executes as a Pod on your Kubernetes cluster. You can configure various execution conditions to fit your business needs.

A Pipeline definition supports the following fields:
- Required:
  - apiVersion - Specifies the API version, for example tekton.dev/v1beta1.
  - kind - Identifies this resource object as a Pipeline object.
  - metadata - Specifies metadata that uniquely identifies the Pipeline object. For example, a name.
  - spec - Specifies the configuration information for this Pipeline object. This must include:
    - tasks - Specifies the Tasks that comprise the Pipeline and the details of their execution.
- Optional:
  - resources - alpha only. Specifies PipelineResources needed or created by the Tasks comprising the Pipeline.
  - tasks:
    - resources.inputs / resources.outputs
      - from - Indicates the data for a PipelineResource originates from the output of a previous Task.
    - runAfter - Indicates that a Task should execute after one or more other Tasks without output linking.
    - retries - Specifies the number of times to retry the execution of a Task after a failure. Does not apply to execution cancellations.
    - conditions - Specifies Conditions that only allow a Task to execute if they successfully evaluate.
    - timeout - Specifies the timeout before a Task fails.
  - results - Specifies the location to which the Pipeline emits its execution results.
  - description - Holds an informative description of the Pipeline object.
  - finally - Specifies one or more Tasks to be executed in parallel after all other tasks have completed.
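Putting the required fields together, a minimal Pipeline definition might look like the sketch below; the Pipeline and Task names are illustrative and assume a Task named echo-hello already exists on the cluster:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: minimal-pipeline # hypothetical name
spec:
  tasks:
    - name: say-hello
      taskRef:
        name: echo-hello # assumes this Task exists on the cluster
```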
A Pipeline requires PipelineResources to provide inputs and store outputs for the Tasks that comprise it. You can declare those in the resources field in the spec section of the Pipeline definition. Each entry requires a unique name and a type. For example:

spec:
  resources:
    - name: my-repo
      type: git
    - name: my-image
      type: image
Workspaces allow you to specify one or more volumes that each Task in the Pipeline requires during execution. You specify one or more Workspaces in the workspaces field. For example:

spec:
  workspaces:
    - name: pipeline-ws1 # The name of the workspace in the Pipeline
  tasks:
    - name: use-ws-from-pipeline
      taskRef:
        name: gen-code # gen-code expects a workspace with name "output"
      workspaces:
        - name: output
          workspace: pipeline-ws1
    - name: use-ws-again
      taskRef:
        name: commit # commit expects a workspace with name "src"
      runAfter:
        - use-ws-from-pipeline # important: use-ws-from-pipeline writes to the workspace first
      workspaces:
        - name: src
          workspace: pipeline-ws1
For more information, see:

- Using Workspaces in Pipelines
- The Workspaces in a PipelineRun code example
You can specify global parameters, such as compilation flags or artifact names, that you want to supply to the Pipeline at execution time. Parameters are passed to the Pipeline from its corresponding PipelineRun and can replace template values specified within each Task in the Pipeline.

Parameter names:

- Must only contain alphanumeric characters, hyphens (-), and underscores (_).
- Must begin with a letter or an underscore (_).

For example, fooIs-Bar_ is a valid parameter name, but barIsBa$ or 0banana are not.
Each declared parameter has a type field, which can be set to either array or string. array is useful in cases where the number of compilation flags being supplied to the Pipeline varies throughout its execution. If no value is specified, the type field defaults to string. When the actual parameter value is supplied, its parsed type is validated against the type field.

The description and default fields for a Parameter are optional.
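As a sketch of an array-typed parameter, the following declaration (the parameter name and default values are illustrative) lets a PipelineRun supply a varying number of flags:

```yaml
spec:
  params:
    - name: flags
      type: array
      description: A variable-length list of flags (hypothetical example)
      default:
        - "--verbose"
        - "--no-cache"
```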
The following example illustrates the use of Parameters in a Pipeline. The following Pipeline declares an input parameter called context and passes its value to the Task to set the value of the pathToContext parameter within the Task. If you specify a value for the default field and invoke this Pipeline in a PipelineRun without specifying a value for context, that value will be used.

Note: Input parameter values can be used as variables throughout the Pipeline by using variable substitution.

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: pipeline-with-parameters
spec:
  params:
    - name: context
      type: string
      description: Path to context
      default: /some/where/or/other
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: "$(params.context)"
The following PipelineRun supplies a value for context:

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: pipelinerun-with-parameters
spec:
  pipelineRef:
    name: pipeline-with-parameters
  params:
    - name: "context"
      value: "/workspace/examples/microservices/leeroy-web"
Your Pipeline definition must reference at least one Task. Each Task within a Pipeline must have a valid name and a taskRef. For example:

tasks:
  - name: build-the-image
    taskRef:
      name: build-push
You can use PipelineResources as inputs and outputs for Tasks in the Pipeline. For example:

spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      resources:
        inputs:
          - name: workspace
            resource: my-repo
        outputs:
          - name: image
            resource: my-image
You can also provide Parameters:

spec:
  tasks:
    - name: build-skaffold-web
      taskRef:
        name: build-push
      params:
        - name: pathToDockerFile
          value: Dockerfile
        - name: pathToContext
          value: /workspace/examples/microservices/leeroy-web
If a Task in your Pipeline needs to use the output of a previous Task as its input, use the optional from parameter to specify a list of Tasks that must execute before the Task that requires their outputs as its input. When your target Task executes, only the version of the desired PipelineResource produced by the last Task in this list is used. The name of this output PipelineResource must match the name of the input PipelineResource specified in the Task that ingests it.

In the example below, the deploy-app Task ingests the output of the build-app Task named my-image as its input. Therefore, the build-app Task will execute before the deploy-app Task regardless of the order in which those Tasks are declared in the Pipeline.
- name: build-app
  taskRef:
    name: build-push
  resources:
    outputs:
      - name: image
        resource: my-image
- name: deploy-app
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: image
        resource: my-image
        from:
          - build-app
If you need your Tasks to execute in a specific order within the Pipeline but they don't have resource dependencies that require the from parameter, use the runAfter parameter to indicate that a Task must execute after one or more other Tasks.

In the example below, we want to test the code before we build it. Since there is no output from the test-app Task, the build-app Task uses runAfter to indicate that test-app must run before it, regardless of the order in which they are referenced in the Pipeline definition.

- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: build-app
  taskRef:
    name: kaniko-build
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
For each Task in the Pipeline, you can specify the number of times Tekton should retry its execution when it fails. When a Task fails, the corresponding TaskRun sets its Succeeded Condition to False. The retries parameter instructs Tekton to retry executing the Task when this happens.

If you expect a Task to encounter problems during execution (for example, you know that there will be issues with network connectivity or missing dependencies), set its retries parameter to a suitable value greater than 0. If you don't explicitly specify a value, Tekton does not attempt to execute the failed Task again.

In the example below, the execution of the build-the-image Task will be retried once after a failure; if the retried execution fails too, the Task execution fails as a whole.

tasks:
  - name: build-the-image
    retries: 1
    taskRef:
      name: build-push
To run a Task only when certain conditions are met, it is possible to guard task execution using the when field. The when field allows you to list a series of references to WhenExpressions.

The components of WhenExpressions are Input, Operator and Values:

- Input is the input for the WhenExpression, which can be static inputs or variables (Parameters or Results). If the Input is not provided, it defaults to an empty string.
- Operator represents an Input's relationship to a set of Values. A valid Operator must be provided, which can be either in or notin.
- Values is an array of string values. The Values array must be provided and be non-empty. It can contain static values or variables (Parameters or Results).
The Parameters are read from the Pipeline and Results are read directly from previous Tasks. Using Results in a WhenExpression in a guarded Task introduces a resource dependency on the previous Task that produced the Result.

The declared WhenExpressions are evaluated before the Task is run. If all the WhenExpressions evaluate to True, the Task is run. If any of the WhenExpressions evaluate to False, the Task is not run and is listed in the Skipped Tasks section of the PipelineRunStatus.
In these examples, the first-create-file task will only be executed if the path parameter is README.md, and the echo-file-exists task will only be executed if the exists result from the check-file task is yes.

tasks:
  - name: first-create-file
    when:
      - input: "$(params.path)"
        operator: in
        values: ["README.md"]
    taskRef:
      name: first-create-file
---
tasks:
  - name: echo-file-exists
    when:
      - input: "$(tasks.check-file.results.exists)"
        operator: in
        values: ["yes"]
    taskRef:
      name: echo-file-exists
For an end-to-end example, see PipelineRun with WhenExpressions.
When WhenExpressions are specified in a Task, Conditions should not be specified in the same Task. The Pipeline will be rejected as invalid if both WhenExpressions and Conditions are included.

There are a lot of scenarios where WhenExpressions can be really useful. Some of these are:

- Checking if the name of a git branch matches
- Checking if the Result of a previous Task is as expected
- Checking if a git file has changed in the previous commits
- Checking if an image exists in the registry
- Checking if the name of a CI job matches
- Checking if an optional Workspace has been provided
Note: Conditions are deprecated, use WhenExpressions instead.

To run a Task only when certain conditions are met, it is possible to guard task execution using the conditions field. The conditions field allows you to list a series of references to Condition resources. The declared Conditions are run before the Task is run. If all of the conditions successfully evaluate, the Task is run. If any of the conditions fails, the Task is not run and the TaskRun status field ConditionSucceeded is set to False with the reason set to ConditionCheckFailed.
In this example, is-master-branch refers to a Condition resource. The deploy task will only be executed if the condition successfully evaluates.

tasks:
  - name: deploy-if-branch-is-master
    conditions:
      - conditionRef: is-master-branch
        params:
          - name: branch-name
            value: my-value
    taskRef:
      name: deploy
Unlike regular task failures, condition failures do not automatically fail the entire PipelineRun -- other tasks that are not dependent on the Task (via from or runAfter) are still run.

In this example, (task C) has a condition set to guard its execution. If the condition is not successfully evaluated, task (task D) will not be run, but all other tasks in the pipeline that do not depend on (task C) will be executed and the PipelineRun will successfully complete.
        (task B) — (task E)
       /
(task A)
       \
        (guarded task C) — (task D)
Resources in conditions can also use the from field to indicate that they expect the output of a previous task as input. As with regular Pipeline Tasks, using from implies ordering -- if a task has a condition that takes in an output resource from another task, the task producing the output resource will run first:

tasks:
  - name: first-create-file
    taskRef:
      name: create-file
    resources:
      outputs:
        - name: workspace
          resource: source-repo
  - name: then-check
    conditions:
      - conditionRef: "file-exists"
        resources:
          - name: workspace
            resource: source-repo
            from: [first-create-file]
    taskRef:
      name: echo-hello
You can use the Timeout field in the Task spec within the Pipeline to set the timeout of the TaskRun that executes that Task within the PipelineRun that executes your Pipeline. The Timeout value is a duration conforming to Go's ParseDuration format. For example, valid values are 1h30m, 1h, 1m, and 60s.

Note: If you do not specify a Timeout value, Tekton instead honors the timeout for the PipelineRun.

In the example below, the build-the-image Task is configured to time out after 90 seconds:

spec:
  tasks:
    - name: build-the-image
      taskRef:
        name: build-push
      timeout: "0h1m30s"
Tasks can emit Results when they execute. A Pipeline can use these Results for two different purposes:

- A Pipeline can pass the Result of a Task into the Parameters or WhenExpressions of another.
- A Pipeline can itself emit Results and include data from the Results of its Tasks.
Sharing Results between Tasks in a Pipeline happens via variable substitution - one Task emits a Result and another receives it as a Parameter with a variable such as $(tasks.<task-name>.results.<result-name>).

When one Task receives the Results of another, there is a dependency created between those two Tasks. In order for the receiving Task to get data from another Task's Result, the Task producing the Result must run first. Tekton enforces this Task ordering by ensuring that the Task emitting the Result executes before any Task that uses it.
In the snippet below, a param is provided its value from the commit Result emitted by the checkout-source Task. Tekton will make sure that the checkout-source Task runs before this one.

params:
  - name: foo
    value: "$(tasks.checkout-source.results.commit)"
In the snippet below, a WhenExpression is provided its value from the exists Result emitted by the check-file Task. Tekton will make sure that the check-file Task runs before this one.

when:
  - input: "$(tasks.check-file.results.exists)"
    operator: in
    values: ["yes"]
For an end-to-end example, see Task Results in a PipelineRun.
A Pipeline can emit Results of its own for a variety of reasons - an external system may need to read them when the Pipeline is complete, they might summarise the most important Results from the Pipeline's Tasks, or they might simply be used to expose non-critical messages generated during the execution of the Pipeline.
A Pipeline's Results can be composed of one or many Task Results emitted during the course of the Pipeline's execution. A Pipeline Result can refer to its Tasks' Results using a variable of the form $(tasks.<task-name>.results.<result-name>).

After a Pipeline has executed, the PipelineRun will be populated with the Results emitted by the Pipeline. These will be written to the PipelineRun's status.pipelineResults field.

In the example below, the Pipeline specifies a results entry with the name sum that references the outputValue Result emitted by the calculate-sum Task.

results:
  - name: sum
    description: the sum of all three operands
    value: $(tasks.calculate-sum.results.outputValue)
For an end-to-end example, see Results in a PipelineRun.
You can connect Tasks in a Pipeline so that they execute in a Directed Acyclic Graph (DAG). Each Task in the Pipeline becomes a node on the graph that can be connected with an edge so that one will run before another and the execution of the Pipeline progresses to completion without getting stuck in an infinite loop.

This is done using:

- from clauses on the PipelineResources used by each Task
- runAfter clauses on the corresponding Tasks
- By linking the results of one Task to the params of another

For example, the Pipeline defined as follows
- name: lint-repo
  taskRef:
    name: pylint
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: test-app
  taskRef:
    name: make-test
  resources:
    inputs:
      - name: workspace
        resource: my-repo
- name: build-app
  taskRef:
    name: kaniko-build-app
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
    outputs:
      - name: image
        resource: my-app-image
- name: build-frontend
  taskRef:
    name: kaniko-build-frontend
  runAfter:
    - test-app
  resources:
    inputs:
      - name: workspace
        resource: my-repo
    outputs:
      - name: image
        resource: my-frontend-image
- name: deploy-all
  taskRef:
    name: deploy-kubectl
  resources:
    inputs:
      - name: my-app-image
        resource: my-app-image
        from:
          - build-app
      - name: my-frontend-image
        resource: my-frontend-image
        from:
          - build-frontend
executes according to the following graph:

        |            |
        v            v
     test-app    lint-repo
    /        \
   v          v
build-app  build-frontend
    \        /
     v      v
    deploy-all
In particular:

- The lint-repo and test-app Tasks have no from or runAfter clauses and start executing simultaneously.
- Once test-app completes, both build-app and build-frontend start executing simultaneously since they both runAfter the test-app Task.
- The deploy-all Task executes once both build-app and build-frontend complete, since it ingests PipelineResources from both.
- The entire Pipeline completes execution once both lint-repo and deploy-all complete execution.
The description field is an optional field and can be used to provide a description of the Pipeline.
You can specify a list of one or more final tasks under the finally section. Final tasks are guaranteed to be executed in parallel after all PipelineTasks under tasks have completed, regardless of success or error. Final tasks are very similar to PipelineTasks under the tasks section and follow the same syntax. Each final task must have a valid name and a taskRef or taskSpec. For example:

spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: cleanup-test
      taskRef:
        name: cleanup
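A final task can also embed its definition inline using taskSpec instead of referencing an existing Task. The step name, image, and script in this sketch are illustrative assumptions:

```yaml
finally:
  - name: cleanup-test
    taskSpec:
      steps:
        - name: cleanup
          image: ubuntu # illustrative image
          script: |
            echo "cleaning up"
```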
Finally tasks can specify workspaces which PipelineTasks might have utilized, e.g. a mount point for credentials held in Secrets. To support that requirement, you can specify one or more Workspaces in the workspaces field for the final tasks, similar to tasks.

spec:
  resources:
    - name: app-git
      type: git
  workspaces:
    - name: shared-workspace
  tasks:
    - name: clone-app-source
      taskRef:
        name: clone-app-repo-to-workspace
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
      resources:
        inputs:
          - name: app-git
            resource: app-git
  finally:
    - name: cleanup-workspace
      taskRef:
        name: cleanup-workspace
      workspaces:
        - name: shared-workspace
          workspace: shared-workspace
Similar to tasks, you can specify Parameters in final tasks:

spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
  finally:
    - name: report-results
      taskRef:
        name: report-results
      params:
        - name: url
          value: "someURL"
With finally, PipelineRun status is calculated based on PipelineTasks under the tasks section and final tasks.

Without finally:

| PipelineTasks under tasks | PipelineRun status | Reason |
|---|---|---|
| all PipelineTasks successful | true | Succeeded |
| one or more PipelineTasks skipped and rest successful | true | Completed |
| single failure of PipelineTask | false | failed |
With finally:

| PipelineTasks under tasks | Final Tasks | PipelineRun status | Reason |
|---|---|---|---|
| all PipelineTasks successful | all final tasks successful | true | Succeeded |
| all PipelineTasks successful | one or more failure of final tasks | false | Failed |
| one or more PipelineTasks skipped and rest successful | all final tasks successful | true | Completed |
| one or more PipelineTasks skipped and rest successful | one or more failure of final tasks | false | Failed |
| single failure of PipelineTask | all final tasks successful | false | failed |
| single failure of PipelineTask | one or more failure of final tasks | false | failed |
Overall, PipelineRun state transitioning is explained below for respective scenarios:

- All PipelineTasks and final tasks are successful: Started -> Running -> Succeeded
- At least one PipelineTask skipped and rest successful: Started -> Running -> Completed
- One PipelineTask failed / one or more final tasks failed: Started -> Running -> Failed

Please refer to the table under Monitoring Execution Status to learn about what kind of events are triggered based on the PipelineRun status.
Similar to tasks, you can use PipelineResources as inputs and outputs for final tasks in the Pipeline. The only difference here is that final tasks with an input resource can not have a from clause, unlike a PipelineTask from the tasks section. For example:

spec:
  tasks:
    - name: tests
      taskRef:
        name: integration-test
      resources:
        inputs:
          - name: source
            resource: tektoncd-pipeline-repo
        outputs:
          - name: workspace
            resource: my-repo
  finally:
    - name: clear-workspace
      taskRef:
        name: clear-workspace
      resources:
        inputs:
          - name: workspace
            resource: my-repo
            from: # invalid
              - tests
It's not possible to configure or modify the execution order of the final tasks. Unlike Tasks in a Pipeline, all final tasks run simultaneously and start executing once all PipelineTasks under tasks have settled, which means no runAfter can be specified in final tasks.
Tasks in a Pipeline can be configured to run only if some conditions are satisfied, using conditions. But the final tasks are guaranteed to be executed after all PipelineTasks, therefore no conditions can be specified in final tasks.
Final tasks can not be configured to consume Results of a PipelineTask from the tasks section, i.e. the following example is not supported right now, but we are working on adding support for the same (tracked in issue #2557).

spec:
  tasks:
    - name: count-comments-before
      taskRef:
        name: count-comments
    - name: add-comment
      taskRef:
        name: add-comment
    - name: count-comments-after
      taskRef:
        name: count-comments
  finally:
    - name: check-count
      taskRef:
        name: check-count
      params:
        - name: before-count
          value: $(tasks.count-comments-before.results.count) # invalid
        - name: after-count
          value: $(tasks.count-comments-after.results.count) # invalid
Final tasks can emit Results but results emitted from the final tasks can not be configured in the Pipeline Results. We are working on adding support for this (tracked in issue #2710).

results:
  - name: comment-count-validate
    value: $(finally.check-count.results.comment-count-validate)

In this example, PipelineResults is set to:

"pipelineResults": [
  {
    "name": "comment-count-validate",
    "value": "$(finally.check-count.results.comment-count-validate)"
  }
],
For a better understanding of Pipelines, study our code examples.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License.