Migrate Jenkins deployment from Bash to Groovy #253

Merged
merged 8 commits into from
Jan 7, 2025

Changes from all commits
2 changes: 1 addition & 1 deletion Dockerfile
@@ -42,7 +42,7 @@ FROM alpine AS downloader
RUN apk add curl grep
# When updating,
# * also update the checksum found at https://dl.k8s.io/release/v${K8S_VERSION}/bin/linux/amd64/kubectl.sha256
# * also update in init-cluster.sh. vars.tf, ApplicationConfigurator.groovy and apply.sh
# * also update in init-cluster.sh, vars.tf, Config.groovy and apply.sh
# When upgrading to 1.26 we can verify the kubectl signature with cosign!
# https://kubernetes.io/blog/2022/12/12/kubernetes-release-artifact-signing/
ARG K8S_VERSION=1.29.8
7 changes: 0 additions & 7 deletions README.md
@@ -830,13 +830,6 @@ Jenkins is available at
* http://localhost:9090 (k3d)
* `scripts/get-remote-url jenkins default` (remote k8s)

You can enable browser notifications about build results via a button in the lower right corner of Jenkins Web UI.

Note that this only works when using `localhost` or `https://`.

<img src="docs/jenkins-enable-notifications.png" alt="Enable Jenkins Notifications" width="300" >
<img src="docs/jenkins-example-notification.png" alt="Example of a Jenkins browser notifications" width="300" >

###### External Jenkins

You can configure an external Jenkins server via the following parameters when applying the playground.
9 changes: 1 addition & 8 deletions docs/configuration.schema.json
@@ -493,14 +493,7 @@
}
},
"helm" : {
"type" : [ "object", "null" ],
"properties" : {
"version" : {
"type" : [ "string", "null" ],
"description" : "The version of the Helm chart to be installed"
}
},
"additionalProperties" : false,
"$ref" : "#/$defs/HelmConfigWithValues-nullable",
"description" : "Common Config parameters for the Helm package manager: Name of Chart (chart), URl of Helm-Repository (repoURL) and Chart Version (version). Note: These config is intended to obtain the chart from a different source (e.g. in air-gapped envs), not to use a different version of a helm chart. Using a different helm chart or version to the one used in the GOP version will likely cause errors."
},
"mavenCentralMirror" : {
44 changes: 40 additions & 4 deletions docs/developers.md
@@ -114,6 +114,41 @@ Jenkins.instance.pluginManager.activePlugins.sort().each {
* Make sure you have updated `plugins.txt` with working versions of the plugins
* commit and push changes to your feature-branch and submit a pr

Note that `plugins.txt` contains the whole dependency tree, including transitive plugin dependencies.
The bare minimum of required plugins is:

```shell
docker-workflow # Used in example builds
git # Used in example builds
junit # Used in example builds
pipeline-utility-steps # Used in example builds, by gitops-build-lib
pipeline-stage-view # Only necessary for better visualization of the builds
prometheus # Necessary to fill Jenkins dashboard in Grafana
scm-manager # Used in example builds
workflow-aggregator # Pipelines plugin, used in example builds
```
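When updating, it can help to diff a freshly extracted plugin list against the committed one. Here is a minimal sketch, assuming entries use the `name:version` format of `plugins.txt`; the file names and versions below are invented for illustration:

```shell
# Create two example plugin lists (placeholder versions, for illustration only)
printf 'git:5.0.0\njunit:1265.v65b_14fa_f12f0\n' > plugins.txt
printf 'git:5.1.0\njunit:1265.v65b_14fa_f12f0\nscm-manager:2.0.0\n' > plugins-new.txt

# Sort both lists so diff compares entries rather than line positions
sort plugins.txt > plugins.sorted
sort plugins-new.txt > plugins-new.sorted

# Show changed, added and removed plugins; diff exits non-zero on differences
diff plugins.sorted plugins-new.sorted || true
```

Lines starting with `>` are new or bumped entries worth reviewing before committing the updated list.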

Note that, when running locally, we also need `kubernetes` and `configuration-as-code`, but these are contained in [our
jenkins helm image](https://github.com/cloudogu/jenkins-helm-image/blob/5.8.1-1/Dockerfile#L2) (extracted from the
[corresponding helm chart version](https://github.com/jenkinsci/helm-charts/blob/jenkins-5.8.1/charts/jenkins/values.yaml#L406-L409)).


### Updating all plugins
To get a minimal list of plugins, start an empty Jenkins that uses [the base image of our image](https://github.com/cloudogu/jenkins-helm-image/blob/main/Dockerfile):

```shell
docker run --rm -v $RANDOM-tmp-jenkins:/var/jenkins_home jenkins/jenkins:2.479.2-jdk17
```
We need a volume to persist the plugins when Jenkins restarts.
(The volumes can be cleaned up afterwards like so: `docker volume ls -q | grep jenkins | xargs -I {} docker volume rm {}`.)

Then:
* manually install the bare minimum of plugins mentioned above
* extract the plugins using the Groovy console as mentioned above
* write the output into `plugins.txt`

We should automate this!
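A possible starting point for that automation is sketched below. Only the normalization step is runnable as-is; the Jenkins/Docker steps are shown as comments because they need a running instance, and all file names are assumptions:

```shell
# 1. Start a throwaway Jenkins with a volume (see the docker run command above)
#    and install the minimal plugin set, e.g. via the UI.
# 2. Dump the plugin list with the script console snippet shown earlier; it
#    prints one "name:version" entry per line, possibly with blank lines around it.
# 3. Normalize that dump into plugins.txt (simulated here with a sample dump):
cat > script-console-dump.txt <<'EOF'

workflow-aggregator:600.vb_57cdd26fdd7
git:5.2.1
EOF

# Keep only name:version lines and sort them, as expected by plugins.txt
grep -E '^[a-z0-9-]+:' script-console-dump.txt | sort > plugins.txt
cat plugins.txt
```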

## Local development

* Run locally
@@ -229,7 +264,7 @@ repository so need to be upgraded regularly.
* Kubernetes [in Terraform](../terraform/vars.tf) and locally [k3d](../scripts/init-cluster.sh),
* [k3d](../scripts/init-cluster.sh)
* [Groovy libs](../pom.xml) + [Maven](../.mvn/wrapper/maven-wrapper.properties)
* Installed components
* Installed components, most versions are maintained in [Config.groovy](../src/main/groovy/com/cloudogu/gitops/config/Config.groovy)
* Jenkins
* Helm Chart
* Plugins
@@ -240,9 +275,10 @@ repository so need to be upgraded regularly.
* SCM-Manager Helm Chart + Plugins
* Docker Registry Helm Chart
* ArgoCD Helm Chart
* Grafana + Prometheus [Helm Charts](../src/main/groovy/com/cloudogu/gitops/ApplicationConfigurator.groovy)
* Vault + ExternalSerets Operator [Helm Charts](../src/main/groovy/com/cloudogu/gitops/ApplicationConfigurator.groovy)
* Ingress-nginx [Helm Charts](https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx)
* Grafana + Prometheus Helm Charts
* Vault + External Secrets Operator Helm Charts
* Ingress-nginx Helm Charts
* Cert-Manager
* Mailhog
* Applications
* GitOps-build-lib + `buildImages`
Binary file removed docs/jenkins-enable-notifications.png
Binary file not shown.
Binary file removed docs/jenkins-example-notification.png
Binary file not shown.
22 changes: 0 additions & 22 deletions jenkins/tmp-docker-gid-grepper.yaml

This file was deleted.

58 changes: 26 additions & 32 deletions jenkins/values.yaml → jenkins/values.ftl.yaml
@@ -1,27 +1,30 @@
# For updating, delete pvc jenkins-docker-client
# When updating, we should not use a version that is too recent, to avoid breaking support for LTS distros like Debian
# https://docs.docker.com/engine/install/debian/#os-requirements -> oldstable
# For example:
# $ curl -s https://download.docker.com/linux/debian/dists/bullseye/stable/binary-amd64/Packages | grep -EA5 'Package\: docker-ce$' | grep Version | sort | uniq | tail -n1
# Version: 5:27.1.1-1~debian.11~bullseye
dockerClientVersion: 27.1.2
dockerClientVersion: ${config.jenkins.internalDockerClientVersion}

controller:
image:
registry: ghcr.io
repository: cloudogu/jenkins-helm
# Use same version here as in ApplicationConfigurator.groovy (config.jenkins['helm']['version'])
tag: "5.5.11"
# The image corresponds to the helm version,
# because it contains the default plugins for this particular chart version
tag: "${config.jenkins.helm.version}"
installPlugins: false

# to prevent the jenkins-ui-test pod being created
testEnabled: false

serviceType: LoadBalancer
serviceType: <#if config.application.remote>LoadBalancer<#else>NodePort</#if>
servicePort: 80
# Is ignored when type is LoadBalancer. For local cluster we change the service type to NodePort
nodePort: 9090

jenkinsUrl: ${config.jenkins.url}

<#if config.application.baseUrl?has_content>
ingress:
enabled: true
hostName: ${config.jenkins.ingress}

</#if>
# Don't use controller for builds
numExecutors: 0

@@ -30,16 +33,6 @@ controller:
node: jenkins

runAsUser: 1000
JCasC:
defaultConfig: true
configScripts:
welcome-message: |
jenkins:
systemMessage: Welcome to our CI\CD server. This Jenkins is configured and managed 'as code'.

sidecars:
configAutoReload:
enabled: false

admin:
# Use reproducible admin password from secret. Change there, if necessary.
@@ -60,8 +53,8 @@ controller:
- name: create-agent-working-dir
securityContext:
runAsUser: 1000
# Note: When upgrading, use same image as in gid-grepper for performance reasons
image: bash:5
<#-- We use the same image for several tasks for performance and maintenance reasons -->
image: ${config.jenkins.internalBashImage}
imagePullPolicy: "{{ .Values.controller.imagePullPolicy }}"
command: [ "/usr/local/bin/bash", "-c" ]
args:
@@ -81,7 +74,6 @@ controller:
find docker -type f -not -name 'docker' -delete;
# Delete containerd, etc. We only need the docker CLI
# Note: "wget -O- | tar" leads to the folder being owned by root, even when creating it beforehand?!
# That's
volumeMounts:
- name: host-tmp
mountPath: /host-tmp
@@ -94,23 +86,25 @@ persistence:
path: /tmp

agent:
# In our local playground infrastructure builds are run in agent containers (pods). During the builds, more
# containers are started via the Jenkins Docker Plugin (on the same docker host).
# In our local playground infrastructure, builds are run in agent containers (pods).
# During the builds, more containers are started via the Jenkins Docker Plugin (on the same docker host).
# This leads to a scenario where the agent container tries to mount its filesystem into another container.
# The docker host is only able to realize this mounts when the mounted paths are the same inside and outside the
# The docker host is only able to realize these mounts when the mounted paths are the same inside and outside the
# containers.
# So as a workaround, we provide the path inside the container also outside the container.
# The /tmp folder is a good choice, because it is writable for all users on the host.
# One disadvantage is, that /tmp is deleted when the host shuts down. Which might slow down builds
# The /tmp folder is a good choice because it is writable for all users on the host.
# One disadvantage is that /tmp is deleted when the host shuts down.
# This might slow down builds.
# A different option would be to link the workspace into this repo.
# If we should ever want to implement this, the logic can be reused from Git History:
# https://github.com/cloudogu/gitops-playground/blob/61e033/scripts/apply.sh#L211-L235
# We mount the same PATH as a hostPath. See below.
# On Multi Node Clusters this leads to the requirement that Jenkins controller and agents run on the same host
# We realize this using nodeSelectors
workingDir: "/tmp/gitops-playground-jenkins-agent"
runAsUser: 1000
runAsGroup: 133
<#-- Note that setting the user as int seems to lead to the value being ignored, either by the helm chart or possibly by the CasC plugin -->
runAsUser: <#if dockerGid?has_content>1000<#else>"0"</#if>
runAsGroup: <#if dockerGid?has_content>${dockerGid}<#else>"133"</#if>
nodeSelector:
node: jenkins
# Number of concurrent builds. Keep it low to avoid high CPU load.
@@ -123,7 +117,7 @@ agent:
envVars:
- name: PATH
# Add /tmp/docker to the path
value: /usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/docker
value: /opt/java/openjdk/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/docker
volumes:
- type: HostPath
# See workingDir
@@ -135,7 +129,7 @@ agent:
hostPath: /tmp/gitops-playground-jenkins-agent
mountPath: /home/jenkins
- type: HostPath
# For this demo, allow jenkins controller to access docker client
# When run locally, allow jenkins controller to access docker client
hostPath: /var/run/docker.sock
mountPath: /var/run/docker.sock
- type: HostPath
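The same-path workaround described in the agent comments above can be illustrated outside Kubernetes. This is a hypothetical sketch with the playground's agent path; the `docker run` part is only shown as a comment since it needs a Docker host:

```shell
# The agent mounts hostPath /tmp/gitops-playground-jenkins-agent at the SAME
# path inside its container. When a build later starts a sibling container via
# the host's Docker socket, e.g.:
#   docker run -v /tmp/gitops-playground-jenkins-agent/workspace:/workspace ...
# the Docker daemon resolves the source path on the HOST. Because the path is
# identical inside and outside the agent, the bind mount hits the right files.
mkdir -p /tmp/gitops-playground-jenkins-agent/workspace
echo 'hello from the agent' > /tmp/gitops-playground-jenkins-agent/workspace/marker
cat /tmp/gitops-playground-jenkins-agent/workspace/marker
```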
2 changes: 1 addition & 1 deletion scripts/init-cluster.sh
@@ -3,7 +3,7 @@
# See https://github.com/rancher/k3d/releases
# This variable is also read in Jenkinsfile
K3D_VERSION=5.7.4
# When updating please also adapt in Dockerfile, vars.tf and ApplicationConfigurator.groovy
# When updating, please also adapt in Dockerfile, vars.tf and Config.groovy
K8S_VERSION=1.29.8
K3S_VERSION="rancher/k3s:v${K8S_VERSION}-k3s1"

Expand Down
65 changes: 2 additions & 63 deletions scripts/jenkins/init-jenkins.sh
@@ -21,71 +21,10 @@ fi

function initJenkins() {
if [[ ${INTERNAL_JENKINS} == true ]]; then
deployLocalJenkins

setExternalHostnameIfNecessary "JENKINS" "jenkins" "default"
fi

configureJenkins
}

function deployLocalJenkins() {

# Mark the first node for Jenkins and agents. See jenkins/values.yamls "agent.workingDir" for details.
# Remove first (in case new nodes were added)
kubectl label --all nodes node- >/dev/null
kubectl label $(kubectl get node -o name | sort | head -n 1) node=jenkins

createSecret jenkins-credentials --from-literal=jenkins-admin-user=$JENKINS_USERNAME --from-literal=jenkins-admin-password=$JENKINS_PASSWORD -n default

helm repo add jenkins https://charts.jenkins.io
helm repo update jenkins
helm upgrade -i jenkins --values jenkins/values.yaml \
$(jenkinsHelmSettingsForLocalCluster) $(jenkinsIngress) $(setAgentGidOrUid) \
--version ${JENKINS_HELM_CHART_VERSION} jenkins/jenkins -n default
}

function jenkinsIngress() {

if [[ -n "${BASE_URL}" ]]; then
if [[ $URL_SEPARATOR_HYPHEN == true ]]; then
local jenkinsHost="jenkins-$(extractHost "${BASE_URL}")"
else
local jenkinsHost="jenkins.$(extractHost "${BASE_URL}")"
fi
local externalJenkinsUrl="$(injectSubdomain "${BASE_URL}" 'jenkins')"
echo "--set controller.jenkinsUrl=$JENKINS_URL --set controller.ingress.enabled=true --set controller.ingress.hostName=${jenkinsHost}"
else
echo "--set controller.jenkinsUrl=$JENKINS_URL"
fi
}

function jenkinsHelmSettingsForLocalCluster() {
if [[ $REMOTE_CLUSTER != true ]]; then
# We need a host port, so jenkins can be reached via localhost:9090
# But: This helm charts only uses the nodePort value, if the type is "NodePort". So change it for local cluster.
echo "--set controller.serviceType=NodePort"
fi
}

# Enable access for the Jenkins Agents Pods to the docker socket
function setAgentGidOrUid() {
# Try to find out the group ID (GID) of the docker group
kubectl apply -f jenkins/tmp-docker-gid-grepper.yaml >/dev/null
until kubectl get po --field-selector=status.phase=Running | grep tmp-docker-gid-grepper >/dev/null; do
sleep 1
done

local DOCKER_GID=$(kubectl exec tmp-docker-gid-grepper -- cat /etc/group | grep docker | cut -d: -f3)
if [[ -n "${DOCKER_GID}" ]]; then
echo "--set agent.runAsGroup=$DOCKER_GID"
else
# If the docker group cannot be found, run as root user
# Unfortunately, the root group (GID 0) usually does not have access to the docker socket. Last ressort: run as root.
# This will happen on Docker Desktop for Windows for example
error "Warning: Unable to determine Docker Group ID (GID). Jenkins Agent pods will run as root user (UID 0)!"
echo '--set agent.runAsUser=0'
fi
installPlugins
}

function waitForJenkins() {
@@ -99,7 +38,7 @@ function waitForJenkins() {
echo ""
}

function configureJenkins() {
function installPlugins() {
local pluginFolder

waitForJenkins