diff --git a/content/doc/book/pipeline/getting-started.adoc b/content/doc/book/pipeline/getting-started.adoc index ec9618253cae..343173474922 100644 --- a/content/doc/book/pipeline/getting-started.adoc +++ b/content/doc/book/pipeline/getting-started.adoc @@ -104,6 +104,9 @@ node { // <1> <2> `echo` writes simple string in the Console Output. +// Despite :sectanchors:, explicitly defining an anchor because it will be +// referenced from other documents +[[defining-a-pipeline-in-scm]] === Defining a Pipeline in SCM Complex pipelines would be cumbersome to write and maintain if you could only do @@ -116,8 +119,8 @@ Script from SCM* option enabled by the workflow-scm-step plugin, which is one of the plugins that the Pipeline plugin depends on and automatically installs. Loading pipeline scripts using the `checkout scm` step leverages the -idea of "pipeline as code," and lets you maintain pipelines using version -control and standalone Groovy editors. +idea of "Pipeline as code" and allows easy maintenance of Pipelines with source +control and text editors. To do this, select *Pipeline script from SCM* when defining the pipeline. @@ -151,6 +154,7 @@ configured Pipeline project. image::pipeline-syntax-sidebar.png[Pipeline Syntax in the side-bar, role=center] +[[snippet-generator]] === Snippet Generator The built-in "Snippet Generator" utility is helpful for creating bits of diff --git a/content/doc/book/pipeline/jenkinsfile.adoc b/content/doc/book/pipeline/jenkinsfile.adoc index e51ab5a81401..b96ca32cbbb7 100644 --- a/content/doc/book/pipeline/jenkinsfile.adoc +++ b/content/doc/book/pipeline/jenkinsfile.adoc @@ -6,201 +6,248 @@ layout: section :author: :email: jenkinsci-docs@googlegroups.com :sectanchors: -:toc: left +:toc: +:hide-uri-scheme: = The Jenkinsfile +This section builds on the information covered in <>, +introduces more useful steps and common patterns, and demonstrates some +non-trivial `Jenkinsfile` examples.
-//// -XXX: Still much reworking of this section to be done -22:00 < rtyler> then I think the other pieces (recording test results, archiving artifacts, parallel, etc) should move over into the Jenkinsfile section of the chapter -22:00 < rtyler> whatcha think bitwiseman (and hrmpw if he's floating about) -22:02 < bitwiseman> References: yeah, that make sense. -//// +Creating a `Jenkinsfile`, which is checked into source control +footnoteref:[scm, https://en.wikipedia.org/wiki/Source_control_management], +provides a number of immediate benefits: +* Code review/iteration on the Pipeline +* Audit trail for the Pipeline +* Single source of truth + footnote:[https://en.wikipedia.org/wiki/Single_Source_of_Truth] + for the Pipeline, which can be viewed and edited by multiple members of the project. -== Audience and Purpose +While the syntax for defining a Pipeline is the same either in the web UI or with a +`Jenkinsfile`, it's generally considered best practice to define the Pipeline +in a `Jenkinsfile` and check that into source control. -This document is intended for Jenkins users who want to leverage the power of -pipeline functionality. Extending the reach of what was learned from a "Hello -World" example in link:/doc/pipeline/[Getting Started with Pipeline], this -document explains how to use a `Jenkinsfile` to perform a simple checkout and -build for the contents of a repository. == Creating a Jenkinsfile -A `Jenkinsfile` is a container for your pipeline (or other) script, which details -what specific steps are needed to perform a job for which you want to use -Jenkins. You create a `Jenkinsfile` with your preferred Groovy editor, or through -the configuration page on the web interface of your Jenkins instance. +As discussed in the <> +section, a `Jenkinsfile` is a text file that contains the definition of a +Jenkins Pipeline and is checked into source control. Consider the following +Pipeline which implements a basic, three-stage, continuous delivery pipeline.
-Using a Groovy editor to code a `Jenkinsfile` gives you more flexibility for -building complex single or multibranch pipelines, but whether you use an editor -or the Jenkins interface does not matter if what you want to do is get familiar -with basic `Jenkinsfile` content. - - -. Open your Jenkins instance or Groovy editor. -. Navigate to the directory you want (it should be the root directory for your project). -. Use standard Jenkins syntax. -. Save your file. - -The following example shows a basic `Jenkinsfile` made to build and test code for -a Maven project. `node` is the step that schedules tasks in the following block -to run on the machine (usually an agent) that matches the label specified in the -step argument (in this case, a node called "linux"). Code between the braces ( -`{` and `}` ) is the body of the `node` step. The `checkout scm` command -indicates that this `Jenkinsfile` was created with an eye toward multibranch -support: - - -[source,groovy] +[pipeline] ---- - node ('linux'){ - stage 'Build and Test' - env.PATH = "${tool 'Maven 3'}/bin:${env.PATH}" - checkout scm - sh 'mvn clean package' - } +// Script // +node { // <1> + stage('Build') { // <2> + /* .. snip .. */ + } + stage('Test') { + /* .. snip .. */ + } + stage('Deploy') { + /* .. snip .. */ + } +} ---- +<1> `node` allocates an executor and workspace on the Jenkins cluster. +<2> `stage` describes distinct parts of the Pipeline for better visualization of progress/status. + +Not all Pipelines will have these same three stages, but defining them is a good +continuous delivery starting point for most projects. The +following steps can be followed to create and execute a simple Pipeline in a +local test installation of Jenkins. + +[NOTE] +==== +It is assumed that there is already a source control repository set up for the +project and a Pipeline has been defined in Jenkins following +<>.
+==== + +Using a text editor, ideally one which supports +link:http://groovy-lang.org[Groovy] +syntax highlighting, create a new `Jenkinsfile` in the root directory of the +project. + + +In the example above, `node` is a crucial first step as it allocates an +executor and workspace for the Pipeline. In essence, without `node`, a Pipeline +cannot do any work! From within `node`, the first order of business will be to +check out the source code for this project. Since the `Jenkinsfile` is being +pulled directly from source control, Pipeline provides a quick and easy way to +access the right revision of the source code. -In single-branch contexts, you could replace .checkout scm. with a source code -checkout step that calls a particular repository, such as: - - -[source,groovy] +[pipeline] ---- - -git url: "https://github.com/my-organization/simple-maven-project-with-tests.git" +// Script // +node { + checkout scm // <1> + /* .. snip .. */ +} ---- +<1> The `checkout` step will check out code from source control; `scm` is a +special variable which instructs the `checkout` step to clone the specific +revision which triggered this Pipeline run. -== Basic Syntax for Pipeline Script +=== Build -You typically add functionality to a new pipeline by performing the following tasks: - -* Adding nodes -* Adding more complex logic (usually expressed as stages and steps) - -To configure a pipeline you have created through the Jenkins UI, select the -pipeline and click *Configure*. - -If you run Jenkins on Linux or another Unix-like operating system with a Git -repository that you want to test, for example, you can do that with syntax like -the following, substituting your own name for `jglick`: +For many projects, the beginning of "work" in the Pipeline would be the "build" +stage. Typically this stage of the Pipeline will be where source code is +assembled, compiled, or packaged.
The `Jenkinsfile` is *not* a replacement for an +existing build tool such as GNU/Make, Maven, Gradle, etc., but rather can be +viewed as a glue layer to bind the multiple phases of a project's development +lifecycle (build, test, deploy, etc.) together. +Jenkins has a number of plugins for invoking practically any build tool in +general use, but this example will simply invoke `make` from a shell step +(`sh`). The `sh` step assumes the system is Unix/Linux-based; for +Windows-based systems, the `bat` step could be used instead. [pipeline] ---- // Script // node { - git url: 'https://github.com/jglick/simple-maven-project-with-tests.git' - def mvnHome = tool 'M3' - sh "${mvnHome}/bin/mvn -B verify" + /* .. snip .. */ + stage('Build') { + sh 'make' // <1> + archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true // <2> + } + /* .. snip .. */ } ---- +<1> The `sh` step invokes the `make` command and will only continue if a +zero exit code is returned by the command. Any non-zero exit code will fail the +Pipeline. +<2> `archiveArtifacts` captures the files built matching the include pattern +(`**/target/*.jar`) and saves them to the Jenkins master for later retrieval. + + +[CAUTION] +==== +Archiving artifacts is not a substitute for using external artifact +repositories such as Artifactory or Nexus and should be considered only for +basic reporting and file archival. +==== + +=== Test + +Running automated tests is a crucial component of any successful continuous +delivery process. As such, Jenkins has a number of test recording, reporting, +and visualization facilities provided by a +link:https://plugins.jenkins.io/?labels=report[number of plugins]. +At a fundamental level, when there are test failures, it is useful to have +them recorded for reporting and visualization. The example below uses the `junit` +step, provided by the +link:https://plugins.jenkins.io/junit[JUnit plugin].
+ +In the example below, if tests fail, the Pipeline is marked "unstable", as +denoted by a yellow ball in the web UI. Based on the recorded test reports, +Jenkins can also provide historical trend analysis and visualization. -In Windows environments, you would use `bat` in place of `sh`, for example, -rather than: - -[source, groovy] ----- -sh "${mvnHome}/bin/mvn -B verify" ----- - -you would use: - -[source, groovy] +[pipeline] ---- -bat "${mvnHome}/bin/mvn -B verify" +// Script // +node { + /* .. snip .. */ + stage('Test') { + /* `make check` returns non-zero on test failures, + * using `true` to allow the Pipeline to continue nonetheless + */ + sh 'make check || true' // <1> + junit '**/target/*.xml' // <2> + } + /* .. snip .. */ +} ---- +<1> Using an inline shell conditional (`sh 'make check || true'`) ensures that the +`sh` step always sees a zero exit code, giving the `junit` step the opportunity +to capture and process the test reports. Alternative approaches to this are +covered in more detail in the <> section below. +<2> `junit` captures and associates the JUnit XML files matching the inclusion +pattern (`**/target/*.xml`). -Your Groovy pipeline script can include functions, conditional tests, loops, -try/catch/finally blocks, and so on. === Deploy -Sample syntax for one node in a Java environment that is using the open source -Maven build automation tool (note the definition for `mvnHome`) is shown below: +Deployment can imply a variety of steps, depending on the project or +organization requirements, and may be anything from publishing built artifacts +to an Artifactory server to pushing code to a production system. +At this stage of the example Pipeline, both the "Build" and "Test" stages have +successfully executed. In essence, the "Deploy" stage will only execute +assuming previous stages completed successfully; otherwise the Pipeline would +have exited early.
[pipeline] ---- // Script // -node('remote') { - git url: 'https://github.com/jglick/simple-maven-project-with-tests.git' - - def mvnHome = tool 'M3' // <1> - def v = version() - - if (v) { - echo "Building version ${v}" +node { + /* .. snip .. */ + stage('Deploy') { + if (currentBuild.result == 'SUCCESS') { // <1> + sh 'make publish' + } } - - sh "${mvnHome}/bin/mvn -B -Dmaven.test.failure.ignore verify" - - archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true - junit '**/target/surefire-reports/TEST-*.xml' -} - -/** Parse the pom.xml for the version number */ -def version() { - def matcher = readFile('pom.xml') =~ '(.+)' - if (matcher) { - return matcher[0][1] - } - return null + /* .. snip .. */ } ---- +<1> Accessing the `currentBuild.result` variable allows the Pipeline Script to +determine if there were any test failures, in which case the value would be +`UNSTABLE`. -Pipeline Sample (graphic) key: +Assuming everything has executed successfully in the example Jenkins Pipeline, +each successful Pipeline run will have the associated build artifacts archived, +test results recorded, and the full console output available in Jenkins. -* `def` is a keyword to define a function (you can also give a Java type in - place of `def` to make it look more like a Java method) -* `=~` is Groovy syntax to match text against a regular expression -* [0] looks up the first match -* [1] looks up the first (…) group within that match -* `readFile` step loads a text file from the workspace and returns its content - (Note: Do not use `java.io.File` methods, these refer to files on the master - where Jenkins is running, not files in the current workspace). +A Pipeline Script can include conditional tests (shown above), loops, +try/catch/finally blocks, and even functions. The next section will cover this +more advanced Pipeline Script syntax in more detail. -The tool step makes sure a tool with the given name is installed on the current node.
The script needs to know where it was installed, so the tool can be run -later. For this, you need a variable. -The `def` keyword in Groovy is the quickest way to define a new variable (with no specific type). +== Advanced Syntax for Pipeline Scripts -In the sample syntax discussed above, a variable is defined by the following expression: +Pipeline Script is a domain-specific language +footnoteref:[dsl, https://en.wikipedia.org/wiki/Domain-specific_language] +based on Groovy; therefore much of +link:http://groovy-lang.org/semantics.html[Groovy syntax] +can be used without further consideration in Pipeline Script. +=== String Interpolation -[source, groovy] +Groovy's "String" interpolation support can be confusing to many newcomers to +the language. While Groovy supports declaring a string with either single quotes or +double quotes, for example: + +[source,groovy] ---- -def mvnHome = tool 'M3' +def singlyQuoted = 'Hello' +def doublyQuoted = "World" ---- -This ensures that 'M3' is installed somewhere accessible to Jenkins and assigns -the return value of the step (an installation path) to the `mvnHome` variable. -== Advanced Syntax for Pipeline Script +Only the latter string will support the dollar-sign (`$`) based string +interpolation, for example: -Groovy lets you omit parentheses around function arguments. The named-parameter -syntax is also a shorthand for creating a map, which in Groovy uses the syntax -`[key1: value1, key2: value2]`, so you could write: - -[source, groovy] +[source,groovy] ---- -git([url: 'https://github.com/joe_user/simple-maven-project-with-tests.git', branch: 'master']) +def username = 'Jenkins' +echo 'Hello Mr. ${username}' +echo "I said, Hello Mr. ${username}" ---- -For convenience, when calling steps taking only one parameter (or only one -mandatory parameter), you can omit the parameter name.
For example, the -following two lines are functionally equivalent: +Would result in: -[source, groovy] +[source] ---- -sh 'echo hello' /* short form */ -sh([script: 'echo hello']) /* long form */ +Hello Mr. ${username} +I said, Hello Mr. Jenkins ---- +Understanding how to use Groovy's string interpolation is vital for using some +of Pipeline Script's more advanced features. + === Working with the Environment Jenkins Pipeline exposes environment variables via the global variable `env`, @@ -214,8 +261,6 @@ JOB_NAME:: Name of the project of this build, such as "foo" or "foo/bar". JENKINS_URL:: Full URL of Jenkins, such as http://example.com:port/jenkins/ (NOTE: only available if Jenkins URL set in "System Configuration") - - Referencing or using these environment variables can be accomplished like accessing any key in a Groovy link:http://groovy-lang.org/syntax.html#_maps[Map], @@ -254,113 +299,174 @@ If you configured your pipeline to accept parameters using the *Build with Parameters* option, those parameters are accessible as Groovy variables of the same name. + +Assuming that a String parameter named "Greeting" has been configured for the +Pipeline project in the web UI, a `Jenkinsfile` can access that parameter via +`$Greeting`: + +[pipeline] +---- +// Script // +node { + echo "${Greeting} World!" +} +---- + ///// TODO: Expand this section with more examples ///// -=== Recording Test Results and Artifacts +=== Handling Failures + +Pipeline Script relies on Groovy's built-in `try`/`catch`/`finally` semantics +for handling failures during execution of the Pipeline. -If there are any test failures in a given build, you want Jenkins to record -them and then proceed, rather than stopping. If you want it saved, you must -capture the `.jar` that you built. 
The following sample code for a node shows how -(As previously seen in examples from this guide, Maven is being used as -a build tool): +In the <> example above, the `sh` step was modified to never return a +non-zero exit code (`sh 'make check || true'`). This approach, while valid, +means the following stages need to check `currentBuild.result` to know if +there has been a test failure or not. + +An alternative way of handling this, which preserves the early-exit behavior of +failures in Pipeline, while still giving `junit` the chance to capture test +reports, is to use a series of `try`/`finally` blocks: [pipeline] ---- // Script // node { /* .. snip .. */ - archiveArtifacts artifacts: '**/target/*.jar', fingerprint: true - junit '**/target/surefire-reports/TEST-*.xml' + stage('Test') { + try { + sh 'make check' + } + finally { + junit '**/target/*.xml' + } + } + /* .. snip .. */ } ---- -(Older versions of Pipeline require a slightly more verbose syntax. -The “snippet generator” can be used to see the exact format.) - -* If tests fail, the Pipeline is marked unstable (as denoted by a yellow ball in - the Jenkins Web UI), and you can browse "Test Result Trend" to see the relevant history. -* You should see Last Successful Artifacts on the Pipeline's main page. - - -== Making Pull Requests +=== Using multiple nodes -A pull request notifies the person responsible for maintaining a Jenkins -repository that you have a change or change set that you want to see merged into -the main branch associated with that repository. Each individual change is -called a "commit." +In all previous uses of the `node` step, it has been used without any +arguments. This means Jenkins will allocate an executor wherever one is +available. The `node` step can take an optional "label" parameter, which is +helpful for more advanced use-cases such as executing builds/tests across +multiple platforms. 
-You make pull requests from a command line, or by selecting the appropriately -labeled button (typically "Pull" or "Create Pull Request") in the interface for -your source code management system. +In the example below, the "Build" stage will be performed on one node and +the built results will be reused on two different nodes, labelled "linux" and +"windows" respectively, during the "Test" stage. -A pull request to a repository included in or monitored by an Organization -Folder can be used to automatically execute a multibranch pipeline build. [pipeline] ---- // Script // +stage('Build') { + node { + checkout scm + sh 'make' + stash includes: '**/target/*.jar', name: 'app' // <1> + } +} +stage('Test') { + node('linux') { // <2> + checkout scm + try { + unstash 'app' // <3> + sh 'make check' + } + finally { + junit '**/target/*.xml' + } + } + node('windows') { + checkout scm + try { + unstash 'app' + bat 'make check' // <4> + } + finally { + junit '**/target/*.xml' + } + } +} ---- <1> The `stash` step allows capturing files matching an inclusion pattern +(`**/target/*.jar`) for reuse within the _same_ Pipeline. Once the Pipeline has +completed its execution, stashed files are deleted from the Jenkins master. <2> The optional parameter to `node` allows for any valid Jenkins label +expression. Consult the inline help for `node` in the <> for more details. <3> `unstash` will retrieve the named "stash" from the Jenkins master into the +Pipeline's current workspace. <4> The `bat` step allows for executing batch scripts on Windows-based +platforms. -== Using Organization Folders +=== Executing in parallel -Organization folders enable Jenkins to automatically detect and include any new -repositories within them as resources. +The example in the <> runs tests across two +different platforms in a linear series. In practice, if the `make check` +execution takes 30 minutes to complete, the "Test" stage would now take 60 +minutes to complete!
-When you create a new repository (as might be the case for a new project), that -repository has a `Jenkinsfile`. If you also configure one or more organization -folders, Jenkins automatically detects any repository in an organization folder, -scans the contents of that repository at either default or configurable -intervals, and creates a Multibranch Pipeline project for what it finds in the -scan. An organization folder functions as a "parent," and any item within it is -treated as a "child" of that parent. +Fortunately, Pipeline has built-in functionality for executing portions of +Pipeline Script in parallel, implemented in the aptly named `parallel` step. -Organization folders alleviate the need to manually create projects for new -repositories. When you use organization folders, Jenkins views your repositories -as a hierarchy, and each repository (organization folder) may optionally have -child elements such as branches or pull requests. +Refactoring the example above to use the `parallel` step: +[pipeline] +---- +// Script // +stage('Build') { + /* .. snip .. */ +} -To create Organization folders: +stage('Test') { + parallel linux: { + node('linux') { + checkout scm + try { + unstash 'app' + sh 'make check' + } + finally { + junit '**/target/*.xml' + } + } + }, + windows: { + node('windows') { + /* .. snip .. */ + } + } +} +---- -. Open Jenkins in your web browser. -. Go to: New Item → GitHub Organization or New Item → Bitbucket Team. -. Follow the configuration steps, making sure to specify appropriate scan - credentials and a specific owner for the GitHub Organization or Bitbucket Team - name. -. Set build triggers by selecting the checkbox associated with the trigger type - you want. Folder scans and the pipeline builds associated with those scans can - be initiated by command scripts or performed at defined intervals. They can also - triggered by project promotion or changes to the images in a monitored Docker - hub. -. 
Decide whether to automatically remove or retain unused items. "Orphaned Item - Strategy" fields in the configuration interface let you specify how many days to - keep old items, and how many old items to keep. If you enter no values in these - fields, unused items are removed by default. +Instead of executing the tests on the "linux" and "windows" labelled nodes in +series, they will now execute in parallel assuming the requisite capacity +exists in the Jenkins cluster. -While configuring organization folders, you can set the following options: -* Repository name pattern - a regular expression to specify which repositories are included in scans -* API endpoint - an alternate API endpoint to use a self-hosted GitHub Enterprise -* Checkout credentials - alternate credentials to use when checking out (cloning) code +=== Optional step arguments -Multibranch Pipeline projects and Organization Folders are examples of -"computed folder" functionality. In Multibranch Pipeline projects, computation -creates child items for eligible branches. In Organization folders, computation -populates child items as individual Multibranch Pipelines for scanned -repositories. +Groovy allows parentheses around function arguments to be omitted. -Select the "Folder Computation" section of your Jenkins interface to see the -duration (in seconds) and result (success or failure) of computation operations, -or to access a Folder Computation Log that provides more detail about this -activity. +Many Pipeline steps also use the named-parameter syntax as a shorthand for +creating a Map in Groovy, which uses the syntax `[key1: value1, key2: value2]`. 
+ This makes statements like the following functionally equivalent: [source, groovy] ---- git url: 'git://example.com/amazing-project.git', branch: 'master' git([url: 'git://example.com/amazing-project.git', branch: 'master']) ---- -== Basic Checkout and Build -Checkout and build command examples are shown in the code example used by the -introduction above. Examples shown assume that Jenkins is running on Linux or -another Unix-like operating system. +For convenience, when calling steps taking only one parameter (or only one -If your Jenkins server or agent is running on Windows, you are less likely to be -using the Bourne shell (`sh`) or -link:http://www.computerhope.com/unix/ubash.htm[Bourne-Again shell] (`bash`) as -a command language interpreter for starting software builds. In Windows -environments, use `bat` in place of `sh`, and backslashes (`\`) rather than -slashes as file separators in pathnames. +mandatory parameter), the parameter name may be omitted, for example: +[source, groovy] +---- +sh 'echo hello' /* short form */ +sh([script: 'echo hello']) /* long form */ +---- diff --git a/content/doc/book/pipeline/multibranch.adoc b/content/doc/book/pipeline/multibranch.adoc index 4ddc572158b7..1e7ffdfa18a3 100644 --- a/content/doc/book/pipeline/multibranch.adoc +++ b/content/doc/book/pipeline/multibranch.adoc @@ -61,3 +61,73 @@ the rest of the source code you are working on. the `BRANCH_NAME` environment variable. In multibranch pipelines, the `checkout scm` step checks out the specific commit that the `Jenkinsfile` originated, so as to maintain branch integrity. + + + +== Making Pull Requests + +A pull request notifies the person responsible for maintaining a Jenkins +repository that you have a change or change set that you want to see merged into +the main branch associated with that repository. Each individual change is +called a "commit."
+ +You make pull requests from a command line, or by selecting the appropriately +labeled button (typically "Pull" or "Create Pull Request") in the interface for +your source code management system. + +A pull request to a repository included in or monitored by an Organization +Folder can be used to automatically execute a multibranch pipeline build. + + +== Using Organization Folders + +Organization folders enable Jenkins to automatically detect and include any new +repositories within them as resources. + +When you create a new repository (as might be the case for a new project), that +repository has a `Jenkinsfile`. If you also configure one or more organization +folders, Jenkins automatically detects any repository in an organization folder, +scans the contents of that repository at either default or configurable +intervals, and creates a Multibranch Pipeline project for what it finds in the +scan. An organization folder functions as a "parent," and any item within it is +treated as a "child" of that parent. + +Organization folders alleviate the need to manually create projects for new +repositories. When you use organization folders, Jenkins views your repositories +as a hierarchy, and each repository (organization folder) may optionally have +child elements such as branches or pull requests. + + +To create Organization folders: + +. Open Jenkins in your web browser. +. Go to: New Item → GitHub Organization or New Item → Bitbucket Team. +. Follow the configuration steps, making sure to specify appropriate scan + credentials and a specific owner for the GitHub Organization or Bitbucket Team + name. +. Set build triggers by selecting the checkbox associated with the trigger type + you want. Folder scans and the pipeline builds associated with those scans can + be initiated by command scripts or performed at defined intervals. They can also + be triggered by project promotion or changes to the images in a monitored Docker + hub. +.
Decide whether to automatically remove or retain unused items. "Orphaned Item + Strategy" fields in the configuration interface let you specify how many days to + keep old items, and how many old items to keep. If you enter no values in these + fields, unused items are removed by default. + +While configuring organization folders, you can set the following options: + +* Repository name pattern - a regular expression to specify which repositories are included in scans +* API endpoint - an alternate API endpoint to use a self-hosted GitHub Enterprise +* Checkout credentials - alternate credentials to use when checking out (cloning) code + +Multibranch Pipeline projects and Organization Folders are examples of +"computed folder" functionality. In Multibranch Pipeline projects, computation +creates child items for eligible branches. In Organization folders, computation +populates child items as individual Multibranch Pipelines for scanned +repositories. + +Select the "Folder Computation" section of your Jenkins interface to see the +duration (in seconds) and result (success or failure) of computation operations, +or to access a Folder Computation Log that provides more detail about this +activity.
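To tie the multibranch behavior above together, the following is a minimal, hypothetical `Jenkinsfile` sketch for a Multibranch Pipeline project. It uses the `BRANCH_NAME` environment variable described earlier in this section; the `make`/`make deploy` targets and the master-branch-only deployment policy are illustrative assumptions, not something this documentation prescribes:

```groovy
// Hypothetical sketch for a Multibranch Pipeline project. The `make` and
// `make deploy` targets, and deploying only from the "master" branch, are
// illustrative assumptions rather than requirements.
node {
    // Check out the specific commit that this branch's Jenkinsfile came from
    checkout scm

    stage('Build') {
        sh 'make'
    }

    // BRANCH_NAME is set for each branch of a Multibranch Pipeline
    if (env.BRANCH_NAME == 'master') {
        stage('Deploy') {
            sh 'make deploy'
        }
    }
}
```

Because the same `Jenkinsfile` runs for every branch, gating on `env.BRANCH_NAME` is a common way to keep deployment restricted to the main branch while still building and testing all branches and pull requests.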