FAQ
The simplest way is to create a Jenkinsfile in your git repo and create a multibranch pipeline job on your Jenkins instance. See https://jenkins.io/doc/pipeline/tour/hello-world/ for more information. A simple Jenkinsfile is shown below. Note that the full list of available tool names can be found in the Tools (JDK, Maven, Ant) section.
pipeline {
  agent any
  tools {
    maven 'apache-maven-latest'
    jdk 'temurin-jdk17-latest'
  }
  options {
    timeout(time: 30, unit: 'MINUTES')
    disableConcurrentBuilds()
    buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '5'))
  }
  stages {
    stage('Build') {
      steps {
        sh '''
          java -version
          mvn -v
        '''
      }
    }
  }
  post {
    // send a mail on unsuccessful and fixed builds
    unsuccessful { // means unstable || failure || aborted
      emailext subject: 'Build $BUILD_STATUS $PROJECT_NAME #$BUILD_NUMBER!',
        body: '''Check console output at $BUILD_URL to view the results.''',
        recipientProviders: [culprits(), requestor()],
        to: '[email protected]'
    }
    fixed { // back to normal
      emailext subject: 'Build $BUILD_STATUS $PROJECT_NAME #$BUILD_NUMBER!',
        body: '''Check console output at $BUILD_URL to view the results.''',
        recipientProviders: [culprits(), requestor()],
        to: '[email protected]'
    }
  }
}
- In general, you can use a pre-built/custom docker image and Jenkins pipelines, see https://wiki.eclipse.org/Jenkins#How_do_I_run_my_build_in_a_custom_container.3F.
- If your project requires UI test-specific dependencies (e.g. metacity, mutter), you can try the ubuntu-latest pod template. The list of installed applications can be found here (it does not show all dependencies): https://github.com/eclipse-cbi/jiro-agents/blob/master/ubuntu/Dockerfile. If that does not work, use a pre-built/custom docker image.
For freestyle jobs, the label can be specified in the job configuration under "Restrict where this project can be run":
Example for pipeline jobs:
pipeline {
  agent {
    kubernetes {
      label 'ubuntu-latest'
    }
  }
  tools {
    maven 'apache-maven-latest'
    jdk 'temurin-jdk17-latest'
  }
  options {
    timeout(time: 30, unit: 'MINUTES')
    disableConcurrentBuilds()
    buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '5'))
  }
  stages {
    stage('Build') {
      steps {
        wrap([$class: 'Xvnc', takeScreenshot: false, useXauthority: true]) {
          sh 'mvn clean verify'
        }
      }
    }
  }
  post {
    //...
  }
}
pipeline {
  agent {
    kubernetes {
      inheritFrom 'ubuntu-latest'
      yaml """
        spec:
          containers:
          - name: jnlp
            resources:
              limits:
                memory: "4Gi"
                cpu: "4000m"
              requests:
                memory: "4Gi"
                cpu: "2000m"
        """
    }
  }
  stages {
    stage('Main') {
      steps {
        sh 'hostname'
      }
    }
  }
}
You need to use a Jenkins pipeline to do so. Then you can specify a Kubernetes pod template. See an example below.
You can either use already existing "official" docker images, for example the maven:<version>-alpine images, or create your own custom docker image.
pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: maven
            image: maven:alpine
            command:
            - cat
            tty: true
          - name: php
            image: php:7.2.10-alpine
            command:
            - cat
            tty: true
          - name: hugo
            image: eclipsecbi/hugo:0.110.0
            command:
            - cat
            tty: true
        """
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
        container('php') {
          sh 'php -version'
        }
        container('hugo') {
          sh 'hugo -version'
        }
      }
    }
  }
}
See the Kubernetes Jenkins plugin for more documentation.
For security reasons, you cannot do that. We run an infrastructure open to the internet, which potentially runs stuff from non-trusted code (e.g., PR) so we need to follow a strict policy to protect the common good.
More specifically, we run containers using an arbitrarily assigned user ID (e.g. 1000100000) in our OpenShift cluster. The group ID is always root (0), though. The security context constraints we use for running projects' containers are "restricted". You cannot change this level from your podTemplate.
Unfortunately, most images you can find on DockerHub (including official images) do not support running as an arbitrary user. Actually, most of them expect to run as root, which is definitely a bad practice.
OpenShift publishes guidelines with best practices about how to create Docker images. More specifically, see the section about how to support running with arbitrary user ID.
To test if an image is ready to be run with an arbitrarily assigned user ID, you can try to start it with the following command line:
$ docker run -it --rm -u $((1000100000 + RANDOM % 100000)):0 image/name:tag
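The same check can be scripted to try several UIDs from the OpenShift-assigned range. The sketch below is illustrative (image/name:tag is a placeholder, and the docker invocation is left commented out so the UID logic can be exercised on its own):

```shell
#!/bin/sh
# Sketch: test an image with a few arbitrary UIDs in the range OpenShift
# assigns (1000100000 + 0..99999), always with group 0 (root).
IMAGE="${1:-image/name:tag}"   # placeholder image name

random_uid() {
  # POSIX sh has no $RANDOM, so derive a pseudo-random offset from
  # /dev/urandom (two bytes, read as an unsigned decimal).
  offset=$(od -An -N2 -tu2 /dev/urandom | tr -d ' ')
  echo $((1000100000 + offset % 100000))
}

for attempt in 1 2 3; do
  uid=$(random_uid)
  echo "Testing ${IMAGE} as UID ${uid} (group 0)"
  # Uncomment to actually run the container:
  # docker run -it --rm -u "${uid}:0" "${IMAGE}"
done
```

If the image starts and its entrypoint works under each tried UID, it is a good candidate for the restricted security context constraints described above.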
You can use and integrate the Eclipse Foundation Jenkins shared library named: jenkins-pipeline-library.
This library provides a containerBuild function for building docker images on the Eclipse Foundation infrastructure.
@Library('releng-pipeline') _
pipeline {
  agent any
  environment {
    HOME = "${env.WORKSPACE}"
  }
  stages {
    stage('build') {
      agent {
        kubernetes {
          yaml loadOverridableResource(
            libraryResource: 'org/eclipsefdn/container/agent.yml'
          )
        }
      }
      steps {
        container('containertools') {
          containerBuild(
            credentialsId: '<jenkins-credential-id>',
            name: 'docker.io/<namespace-name>/<container-name>',
            version: 'latest'
          )
        }
      }
    }
  }
}
You need to mount the tools persistent volume, as shown in the example below.
pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: custom-name
            image: my-custom-image:latest
            tty: true
            command:
            - cat
            volumeMounts:
            - name: tools
              mountPath: /opt/tools
          volumes:
          - name: tools
            persistentVolumeClaim:
              claimName: tools-claim-jiro-<project_shortname>
        """
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('custom-name') {
          sh '/opt/tools/apache-maven/latest/bin/mvn -version'
        }
      }
    }
  }
}
Important
Do not forget to replace <project_shortname> in the claimName with your project name (e.g. tools-claim-jiro-cbi for the CBI project).
Due to recent changes in the Jenkins Kubernetes plugin, you need to specify an emptyDir volume for /home/jenkins if your build uses a directory like /home/jenkins/.ivy2 or /home/jenkins/.npm.
pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: custom-name
            image: my-custom-image:latest
            tty: true
            command:
            - cat
            volumeMounts:
            - mountPath: "/home/jenkins"
              name: "jenkins-home"
              readOnly: false
          volumes:
          - name: "jenkins-home"
            emptyDir: {}
        """
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('custom-name') {
          sh 'mkdir -p /home/jenkins/foobar'
        }
      }
    }
  }
}
Note
We are not satisfied with this workaround and are actively looking for a more convenient way to let projects use custom containers without specifying a bunch of volume mounts.
You cannot just cp stuff to a folder. You need to do that with ssh and scp while connecting to projects-storage.eclipse.org. Therefore, SSH credentials need to be set up on the project's Jenkins instance. This is already set up by default for all instances on our infrastructure.
This service provides access to the Eclipse Foundation file servers storage:
/home/data/httpd/download.eclipse.org
/home/data/httpd/archive.eclipse.org
/home/data/httpd/download.polarsys.org
/home/data/httpd/download.locationtech.org
Depending on how you run your build, the way you will use them is different. See the different cases below.
You need to activate the "SSH Agent" plugin in your job configuration and select the proper credentials: genie.<projectname> (ssh://projects-storage.eclipse.org).
(See screenshot: project-storage-ssh-agent.png)
Then you can use the ssh, scp, rsync and sftp commands to deploy artifacts to the server, e.g.:
scp -o BatchMode=yes target/my_artifact.jar genie.<projectname>@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/<projectname>/
ssh -o BatchMode=yes genie.<projectname>@projects-storage.eclipse.org ls -al /home/data/httpd/download.eclipse.org/<projectname>/
rsync -a -e ssh <local_dir> genie.<projectname>@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/<projectname>/
It is possible to deploy build output from within Maven, using Maven Wagon and wagon-ssh-external. As the build environment uses an SSH agent, the Maven Wagon plugins must use the external SSH commands so that the agent is used.
If the build outputs are executables or a p2 update site rather than Maven artifacts, the standard Maven deploy needs to be disabled, e.g. with the following in the appropriate profile in the parent pom.xml:
<plugin>
  <artifactId>maven-deploy-plugin</artifactId>
  <configuration>
    <skip>true</skip>
  </configuration>
</plugin>
Define some properties for the destination in parent/pom.xml:
<download-publish-path>/home/data/httpd/download.eclipse.org/[projectname]/snapshots/update-site</download-publish-path>
<download-remote-publish-path>genie.[projectname]@projects-storage.eclipse.org:/home/data/httpd/download.eclipse.org/[projectname]/snapshots/update-site</download-remote-publish-path>
Define the Wagon transport in parent/pom.xml:
<build>
  <plugins>
    <plugin>
      ...
    </plugin>
  </plugins>
  <extensions>
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-ssh-external</artifactId>
      <version>3.0.0</version>
    </extension>
  </extensions>
</build>
Do the actual upload during the deploy phase (be sure to add that to the Maven invocation).
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>wagon-maven-plugin</artifactId>
  <version>2.0.0</version>
  <executions>
    <execution>
      <id>prepare-publish</id>
      <phase>deploy</phase>
      <goals>
        <goal>sshexec</goal>
      </goals>
      <configuration>
        <url>scpexe://${download-remote-publish-path}</url>
        <commands>
          <command>rm -rf ${download-publish-path}/*</command>
        </commands>
      </configuration>
    </execution>
    <execution>
      <id>publish</id>
      <phase>deploy</phase>
      <goals>
        <goal>upload</goal>
      </goals>
      <configuration>
        <fromDir>target/repository</fromDir>
        <includes>*/**</includes>
        <url>scpexe://${download-remote-publish-path}</url>
        <toDir></toDir>
      </configuration>
    </execution>
  </executions>
</plugin>
This uses the sshexec goal to delete the old files and the upload goal to copy the new files. Note that */** matches all directories. <toDir></toDir> appears to be relative to the path given in the URL.
Be careful with paths and properties to ensure you upload to the correct place and do not delete the wrong thing.
Eclipse Memory Analyzer uses the above with Maven Wagon to deploy the snapshot nightly builds.
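Because the sshexec execution above runs rm -rf on the server, it is worth validating the destination before anything destructive happens. The helper below is purely illustrative (it is not part of Maven Wagon or the Eclipse infrastructure); it refuses any path that lies outside the project's own download area:

```shell
#!/bin/sh
# Hypothetical guard: refuse paths outside the project's download area.
# DOWNLOAD_ROOT matches the file server layout described above; the
# project names used below are illustrative.
DOWNLOAD_ROOT="/home/data/httpd/download.eclipse.org"

check_publish_path() {
  project="$1"; path="$2"
  case "$path" in
    "${DOWNLOAD_ROOT}/${project}"/*)
      echo "ok: $path"
      return 0 ;;
    *)
      echo "refusing to touch: $path" >&2
      return 1 ;;
  esac
}

# A valid destination passes, a destination in another project's area fails:
check_publish_path myproject "${DOWNLOAD_ROOT}/myproject/snapshots/update-site"
check_publish_path myproject "${DOWNLOAD_ROOT}/otherproject/releases" || true
```

Running such a check before the rm -rf (e.g. as a preceding sshexec command or a local sanity step) turns a mis-set property into a failed build instead of a deleted release area.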
pipeline {
  agent any
  stages {
    stage('stage 1') {
      // ...
    }
    stage('Deploy') {
      steps {
        sshagent(['projects-storage.eclipse.org-bot-ssh']) {
          sh '''
            ssh -o BatchMode=yes [email protected] rm -rf /home/data/httpd/download.eclipse.org/projectname/snapshots
            ssh -o BatchMode=yes [email protected] mkdir -p /home/data/httpd/download.eclipse.org/projectname/snapshots
            scp -o BatchMode=yes -r repository/target/repository/* [email protected]:/home/data/httpd/download.eclipse.org/projectname/snapshots
          '''
        }
      }
    }
  }
}
Important
A 'jnlp' container is automatically added when a custom pod template is used, to ensure connectivity between the Jenkins master and the pod. If you want to deploy files to download.eclipse.org, you only need to specify the known-hosts volume for the jnlp container (as seen below) to avoid "Host key verification failed" errors.
pipeline {
  agent {
    kubernetes {
      label 'my-pod'
      yaml '''
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: maven
            image: maven:alpine
            command:
            - cat
            tty: true
          - name: jnlp
            volumeMounts:
            - name: volume-known-hosts
              mountPath: /home/jenkins/.ssh
          volumes:
          - name: volume-known-hosts
            configMap:
              name: known-hosts
        '''
    }
  }
  stages {
    stage('Build') {
      steps {
        container('maven') {
          sh 'mvn clean verify'
        }
      }
    }
    stage('Deploy') {
      steps {
        container('jnlp') {
          sshagent(['projects-storage.eclipse.org-bot-ssh']) {
            sh '''
              ssh -o BatchMode=yes [email protected] rm -rf /home/data/httpd/download.eclipse.org/projectname/snapshots
              ssh -o BatchMode=yes [email protected] mkdir -p /home/data/httpd/download.eclipse.org/projectname/snapshots
              scp -o BatchMode=yes -r repository/target/repository/* [email protected]:/home/data/httpd/download.eclipse.org/projectname/snapshots
            '''
          }
        }
      }
    }
  }
}
Every JIPP has a Maven settings file set up that specifies our local Nexus instance as cache for Maven Central.
Important
In Jiro this works out of the box for the default pod templates. No additional configuration is required for Freestyle and Pipeline jobs. For custom containers, see the following section.
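To see which mirrors a given settings file actually defines, you can extract its <url> entries. The sketch below is illustrative only: the sample settings.xml it generates (including the mirror id and URL) is an assumption, not the actual file shipped on JIPP agents; on a real agent you would point list_mirrors at /home/jenkins/.m2/settings.xml instead.

```shell
#!/bin/sh
# Sketch: list the <url> entries of the mirrors in a Maven settings file.
list_mirrors() {
  # naive line-based extraction; fine for the well-formed settings files
  # Maven ships, not a general XML parser
  sed -n 's|.*<url>\(.*\)</url>.*|\1|p' "$1"
}

# Assumed, abridged stand-in for the real settings file:
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <id>eclipse.maven.central.mirror</id>
      <mirrorOf>central</mirrorOf>
      <url>https://repo.eclipse.org/content/repositories/maven_central/</url>
    </mirror>
  </mirrors>
</settings>
EOF

list_mirrors "$SAMPLE"
```

If the mirror is in place, Maven resolves Central dependencies through repo.eclipse.org without any change to your pom.xml.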
You need to add the settings-xml volume, as shown below. Please note that the m2-repo volume is required as well, otherwise /home/jenkins/.m2/repository is not writable.
Note
In custom containers, the user.home environment variable needs to be set to /home/jenkins via MAVEN_OPTS, otherwise settings.xml and settings-security.xml cannot be found.
pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: maven
            image: maven:alpine
            tty: true
            command:
            - cat
            env:
            - name: "MAVEN_OPTS"
              value: "-Duser.home=/home/jenkins"
            volumeMounts:
            - name: settings-xml
              mountPath: /home/jenkins/.m2/settings.xml
              subPath: settings.xml
              readOnly: true
            - name: m2-repo
              mountPath: /home/jenkins/.m2/repository
          volumes:
          - name: settings-xml
            secret:
              secretName: m2-secret-dir
              items:
              - key: settings.xml
                path: settings.xml
          - name: m2-repo
            emptyDir: {}
        """
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
      }
    }
  }
}
If your project does not have its own repo on Nexus yet, then open a HelpDesk issue and specify what project you'd like a Nexus repo for.
If your project does have its own repo on Nexus already, then you can use Maven (or Gradle) to deploy artifacts to repo.eclipse.org.
Note
On our cluster-based infra (Jiro), a separate Maven settings file for deployment to Nexus is not required. All information is contained in the default Maven settings file located at /home/jenkins/.m2/settings.xml, which does not need to be specified explicitly in your job configuration.
You need to add the settings-xml and settings-security-xml volumes, as shown below. Please note that the m2-repo volume is required as well, otherwise /home/jenkins/.m2/repository is not writable.
Note
In custom containers, the user.home environment variable needs to be set to /home/jenkins via MAVEN_OPTS, otherwise settings.xml and settings-security.xml cannot be found.
pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: maven
            image: maven:alpine
            tty: true
            command:
            - cat
            env:
            - name: "MAVEN_OPTS"
              value: "-Duser.home=/home/jenkins"
            volumeMounts:
            - name: settings-xml
              mountPath: /home/jenkins/.m2/settings.xml
              subPath: settings.xml
              readOnly: true
            - name: settings-security-xml
              mountPath: /home/jenkins/.m2/settings-security.xml
              subPath: settings-security.xml
              readOnly: true
            - name: m2-repo
              mountPath: /home/jenkins/.m2/repository
          volumes:
          - name: settings-xml
            secret:
              secretName: m2-secret-dir
              items:
              - key: settings.xml
                path: settings.xml
          - name: settings-security-xml
            secret:
              secretName: m2-secret-dir
              items:
              - key: settings-security.xml
                path: settings-security.xml
          - name: m2-repo
            emptyDir: {}
        """
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn clean deploy'
        }
      }
    }
  }
}
Deploying artifacts to OSSRH (OSS Repository Hosting provided by Sonatype) requires an account at OSSRH. It is also required to sign all artifacts with GPG. The Eclipse IT team will set this up for the project.
Please open a HelpDesk issue for this first.
Note
On our cluster-based infra (Jiro), a separate Maven settings file for deployment to OSSRH is not necessary. All information is contained in the default Maven settings file located at /home/jenkins/.m2/settings.xml, which does not need to be specified explicitly in your job configuration.
If you are using a custom container, please see https://wiki.eclipse.org/Jenkins#Custom_container_on_Jiro_3.
Steps:
1. Add secret-subkeys.asc as a secret file in the job configuration.
2. Import the GPG keyring with --batch and trust the keys non-interactively in a shell build step (before the Maven call):
gpg --batch --import "${KEYRING}"
for fpr in $(gpg --list-keys --with-colons | awk -F: '/fpr:/ {print $10}' | sort -u);
do
  echo -e "5\ny\n" | gpg --batch --command-fd 0 --expert --edit-key $fpr trust;
done
3. If a newer GPG version (2.1 or later) is used, --pinentry-mode loopback needs to be added as a GPG argument in the pom.xml:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-gpg-plugin</artifactId>
  <version>1.6</version>
  <executions>
    <execution>
      <id>sign-artifacts</id>
      <phase>verify</phase>
      <goals>
        <goal>sign</goal>
      </goals>
      <configuration>
        <gpgArguments>
          <arg>--pinentry-mode</arg>
          <arg>loopback</arg>
        </gpgArguments>
      </configuration>
    </execution>
  </executions>
</plugin>
This is a simple pipeline job that allows testing the GPG signing.
pipeline {
  agent any
  tools {
    maven 'apache-maven-latest'
    jdk 'adoptopenjdk-hotspot-jdk8-latest'
  }
  stages {
    stage('Build') {
      steps {
        sh "mvn -B -U archetype:generate -DgroupId=com.mycompany.app -DartifactId=my-app -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false"
        sh '''cat >my-app/pom.xml <<EOL
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.mycompany.app</groupId>
  <artifactId>my-app</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>my-app</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-gpg-plugin</artifactId>
        <version>1.6</version>
        <executions>
          <execution>
            <id>sign-artifacts</id>
            <phase>verify</phase>
            <goals>
              <goal>sign</goal>
            </goals>
            <configuration>
              <gpgArguments>
                <arg>--pinentry-mode</arg>
                <arg>loopback</arg>
              </gpgArguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</project>
EOL'''
        withCredentials([file(credentialsId: 'secret-subkeys.asc', variable: 'KEYRING')]) {
          sh 'gpg --batch --import "${KEYRING}"'
          sh 'for fpr in $(gpg --list-keys --with-colons | awk -F: \'/fpr:/ {print $10}\' | sort -u); do echo -e "5\ny\n" | gpg --batch --command-fd 0 --expert --edit-key ${fpr} trust; done'
        }
        sh "mvn -B -f my-app/pom.xml clean verify"
        sh 'gpg --verify my-app/target/my-app-1.0-SNAPSHOT.jar.asc'
      }
    }
  }
}
When you are using a custom container on Jiro, you will need to add the settings-xml and settings-security-xml volumes, as shown below.
Please note:
- the m2-repo volume is required as well, otherwise /home/jenkins/.m2/repository is not writable
- the toolchains-xml volume is optional, but added for completeness
- you also might need to add additional volumes like volume-known-hosts (as described here: https://wiki.eclipse.org/Jenkins#How_do_I_deploy_artifacts_to_download.eclipse.org.3F)
pipeline {
  agent {
    kubernetes {
      label 'my-agent-pod'
      yaml """
        apiVersion: v1
        kind: Pod
        spec:
          containers:
          - name: maven
            image: maven:alpine
            tty: true
            command:
            - cat
            volumeMounts:
            - name: settings-xml
              mountPath: /home/jenkins/.m2/settings.xml
              subPath: settings.xml
              readOnly: true
            - name: toolchains-xml
              mountPath: /home/jenkins/.m2/toolchains.xml
              subPath: toolchains.xml
              readOnly: true
            - name: settings-security-xml
              mountPath: /home/jenkins/.m2/settings-security.xml
              subPath: settings-security.xml
              readOnly: true
            - name: m2-repo
              mountPath: /home/jenkins/.m2/repository
          volumes:
          - name: settings-xml
            secret:
              secretName: m2-secret-dir
              items:
              - key: settings.xml
                path: settings.xml
          - name: toolchains-xml
            configMap:
              name: m2-dir
              items:
              - key: toolchains.xml
                path: toolchains.xml
          - name: settings-security-xml
            secret:
              secretName: m2-secret-dir
              items:
              - key: settings-security.xml
                path: settings-security.xml
          - name: m2-repo
            emptyDir: {}
        """
    }
  }
  stages {
    stage('Run maven') {
      steps {
        container('maven') {
          sh 'mvn -version'
        }
      }
    }
  }
}
Error message | Solution
---|---
gpg: signing failed: Not a tty | GPG version > 2.1 is used and --pinentry-mode loopback needs to be added to the maven-gpg-plugin config in the pom.xml (see above).
gpg: invalid option "--pinentry-mode" | GPG version < 2.1 is used and --pinentry-mode loopback needs to be removed from the maven-gpg-plugin config in the pom.xml.
gpg: no default secret key: No secret key or gpg: signing failed: No secret key | The GPG keyring needs to be imported (see above).
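Which of the two --pinentry-mode fixes applies can be decided from the installed GPG version. The sketch below is an illustrative helper, not part of any official tooling; it relies on GNU sort -V for the version comparison:

```shell
#!/bin/sh
# Sketch: decide whether --pinentry-mode loopback is needed (GPG >= 2.1).
needs_pinentry_loopback() {
  version="$1"   # e.g. "2.2.19"
  # sort -V puts the smaller version first; if "2.1" sorts first (or they
  # are equal), the installed version is >= 2.1.
  lowest=$(printf '%s\n%s\n' "$version" "2.1" | sort -V | head -n1)
  [ "$lowest" = "2.1" ] || [ "$version" = "2.1" ]
}

# In a build step you would typically feed it from:
#   gpg --version | awk 'NR==1 {print $3}'
for v in 1.4.23 2.0.31 2.2.19; do
  if needs_pinentry_loopback "$v"; then
    echo "$v: add --pinentry-mode loopback"
  else
    echo "$v: omit --pinentry-mode"
  fi
done
```

This makes it easy to keep one pom.xml and toggle the extra GPG argument via a profile instead of editing the plugin config per machine.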
Integrating SonarCloud into a project's CI builds is quite easy. Please open a HelpDesk issue for this and the releng/webmaster team will help you set this up.
We're setting this up with the webmaster's SonarCloud.io account, so there is no need to provide a SonarCloud token. To avoid leaking the token in the console log, we store it as secret text (Jenkins credentials).
We will configure the SonarQube Jenkins plugin to use SonarCloud to achieve a slightly better integration with Jenkins. For example, a link to SonarCloud will show up in the left menu of a job page.
For a freestyle job configuration, two things need to be done:
- Under "Build environment" enable "Use secret text(s) or file(s)", add "Secret text", name the variable "SONARCLOUD_TOKEN" and select the right credential (e.g. "Sonarcloud token").
- Either a shell build step or a Maven build step can be used to run the sonar goal with the right parameters:
mvn clean verify sonar:sonar -Dsonar.projectKey=<project-name> -Dsonar.organization=<organization> -Dsonar.host.url=${SONAR_HOST_URL} -Dsonar.login=${SONARCLOUD_TOKEN}
For a pipeline job, the following needs to be added:
withCredentials([string(credentialsId: 'sonarcloud-token', variable: 'SONARCLOUD_TOKEN')]) {
  withSonarQubeEnv('SonarCloud.io') {
    sh 'mvn clean verify sonar:sonar -Dsonar.projectKey=<project-name> -Dsonar.organization=<organization> -Dsonar.host.url=${SONAR_HOST_URL} -Dsonar.login=${SONARCLOUD_TOKEN}'
  }
}
Please note: <project-name> and <organization> should be replaced with the corresponding project name and organization.
In general, we want to avoid handing out admin rights. In the spirit of "configuration as code", project members can submit pull requests to our Jiro GitHub repo and change the configuration of their CI instance. E.g. adding plugins, etc. This allows better tracking of configuration changes and rollback in case of issues.
We understand that some projects heavily rely on their admin permissions. We will make sure to find an amicable solution in those cases.
The preferred way is to open a pull request in the Jiro GitHub repo. For example, to add a new plugin to the CBI instance, edit https://github.com/eclipse-cbi/jiro/blob/master/instances/technology.cbi/config.jsonnet and add the ID of the plugin to the plugins+ section. If the jenkins+/plugins+ section does not exist yet, it needs to be added as well.
Example:
{
  project+: {
    fullName: "technology.cbi",
    displayName: "Eclipse CBI",
  },
  jenkins+: {
    plugins+: [
      "jacoco",
    ],
  }
}
Before adding a plugin, please verify that it's not already listed in https://github.com/eclipse-cbi/jiro/wiki/Default-Jenkins-plugins.
The ID of a Jenkins plugin can be found here: https://plugins.jenkins.io/
If this sounds too complicated, you can also open a HelpDesk issue.
The preferred static website generator for building Eclipse project websites is Hugo. You should first put your Hugo sources in a dedicated Git repository, either at GitHub or https://gitlab.eclipse.org. If you don't have such a repository already, feel free to open a HelpDesk issue and the Eclipse IT team will create one for you.
Once your Hugo sources are in the proper repository, create a file named Jenkinsfile at the root of the repository with the following content (don't forget to specify the proper values for the PROJECT_NAME and PROJECT_BOT_NAME environment variables):
pipeline {
  agent {
    kubernetes {
      label 'hugo-agent'
      yaml """
        apiVersion: v1
        metadata:
          labels:
            run: hugo
          name: hugo-pod
        spec:
          containers:
          - name: jnlp
            volumeMounts:
            - mountPath: /home/jenkins/.ssh
              name: volume-known-hosts
            env:
            - name: "HOME"
              value: "/home/jenkins"
          - name: hugo
            image: eclipsecbi/hugo:0.110.0
            command:
            - cat
            tty: true
          volumes:
          - configMap:
              name: known-hosts
            name: volume-known-hosts
        """
    }
  }
  environment {
    PROJECT_NAME = "<project_name>" // must be all lowercase.
    PROJECT_BOT_NAME = "<Project_name> Bot" // Capitalize the name
  }
  triggers {
    pollSCM('H/10 * * * *')
  }
  options {
    buildDiscarder(logRotator(numToKeepStr: '5'))
    checkoutToSubdirectory('hugo')
  }
  stages {
    stage('Checkout www repo') {
      steps {
        dir('www') {
          sshagent(['git.eclipse.org-bot-ssh']) {
            sh '''
              git clone ssh://genie.${PROJECT_NAME}@git.eclipse.org:29418/www.eclipse.org/${PROJECT_NAME}.git .
              git checkout ${BRANCH_NAME}
            '''
          }
        }
      }
    }
    stage('Build website (master) with Hugo') {
      when {
        branch 'master'
      }
      steps {
        container('hugo') {
          dir('hugo') {
            sh 'hugo -b https://www.eclipse.org/${PROJECT_NAME}/'
          }
        }
      }
    }
    stage('Build website (staging) with Hugo') {
      when {
        branch 'staging'
      }
      steps {
        container('hugo') {
          dir('hugo') {
            sh 'hugo -b https://staging.eclipse.org/${PROJECT_NAME}/'
          }
        }
      }
    }
    stage('Push to $env.BRANCH_NAME branch') {
      when {
        anyOf {
          branch "master"
          branch "staging"
        }
      }
      steps {
        sh 'rm -rf www/* && cp -Rvf hugo/public/* www/'
        dir('www') {
          sshagent(['git.eclipse.org-bot-ssh']) {
            sh '''
              git add -A
              if ! git diff --cached --exit-code; then
                echo "Changes have been detected, publishing to repo 'www.eclipse.org/${PROJECT_NAME}'"
                git config user.email "${PROJECT_NAME}[email protected]"
                git config user.name "${PROJECT_BOT_NAME}"
                git commit -m "Website build ${JOB_NAME}-${BUILD_NUMBER}"
                git log --graph --abbrev-commit --date=relative -n 5
                git push origin HEAD:${BRANCH_NAME}
              else
                echo "No changes have been detected since last build, nothing to publish"
              fi
            '''
          }
        }
      }
    }
  }
}
Finally, you can create a multibranch pipeline job on your project's Jenkins instance. It will automatically be triggered on every new push to your Hugo source repository, build the website, and push it to the target website repository. As mentioned above, the Eclipse Foundation website infrastructure will eventually pull the content of the latter, and your website will be published and available at https://www.eclipse.dev/<project_name>.
If you don't have a Jenkins instance already, see CBI#Requesting_a_JIPP_instance. If you need assistance with the process, please open a HelpDesk issue.
By default, Jenkins project configurations using the GitLab Branch Source plugin are set up with 'Trusted Members' as the default option.
Definition of 'trusted members': [Recommended] Discover merge requests from forked projects whose authors have Developer/Maintainer/Owner access levels in the origin project.
To accept contributions from contributors working from forked projects, the following is needed:
- Configure the CI project by changing 'Discover merge requests from forks' to 'Members'.
- The project lead should add the contributor to the list of collaborators in the PMI.
- The contributor should change the forked project's visibility to public.