The prow cluster is where we run Prow, which currently does a lot of our CI, though we are trying to dogfood more and more.
tektoncd uses Prow for CI automation, though we are moving this over to use our own dogfooding.
- Prow runs in the tektoncd GCP project
- Ingress is configured to prow.tekton.dev
- Prow results are displayed via gubernator
- Instructions for creating the Prow cluster
- Instructions for updating Prow and Prow's Tekton Pipelines instance
- Instructions for updating Prow configuration
See the community docs for more on Prow and the PR process, and see Prow's own docs.
Secrets which have been applied to the prow cluster but are not committed here are:
- GitHub personal access tokens:
  - `bot-token-github` in the `default` namespace
  - `bot-token-github` in the `github-admin` namespace
  - `hmac-token` for authenticating GitHub
  - `oauth-token`, which is a GitHub access token for `tekton-robot`, used by Prow itself as well as by containers started by Prow via the Prow config. See the GitHub secret Prow docs.
- GCP secrets:
  - `test-account` is a token for the service account `[email protected]`. This account can interact with GCP resources such as uploading Prow results to GCS (which is done directly from the containers started by Prow, configured in config.yaml) and interacting with boskos clusters.
- Nightly release secret:
  - `nightly-account`, a token for the nightly-release GCP service account
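These secrets are not committed, but for reference they can be (re)created with `kubectl create secret generic`; a minimal sketch, where the key names follow the usual upstream Prow conventions and the local file paths are placeholders:

```bash
# Sketch only: the key names (hmac, oauth) match Prow's conventional
# expectations; the /path/to/... files are placeholders for the real values.
kubectl create secret generic hmac-token --from-file=hmac=/path/to/hmac-token
kubectl create secret generic oauth-token --from-file=oauth=/path/to/oauth-token
kubectl create secret generic test-account --from-file=service-account.json=/path/to/test-account-key.json
```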
If you need to re-create the Prow cluster (which includes the boskos running inside), you will need to:
- Create a new cluster
- Create the necessary secrets
- Apply the new Prow and Boskos
- Set up ingress
- Update GitHub webhook(s)
To create a cluster of the right size, using the same GCP project:
```bash
export PROJECT_ID=tekton-releases
export CLUSTER_NAME=tekton-plumbing
gcloud container clusters create $CLUSTER_NAME \
  --scopes=cloud-platform \
  --enable-basic-auth \
  --issue-client-certificate \
  --project=$PROJECT_ID \
  --zone=us-central1-a \
  --machine-type=n1-standard-4 \
  --image-type=cos \
  --num-nodes=8 \
  --cluster-version=latest
```
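Once the cluster exists, point kubectl at it before applying anything (the zone and project match the create command above):

```bash
# Fetch credentials for the new cluster so the kubectl commands below target it.
gcloud container clusters get-credentials $CLUSTER_NAME --zone us-central1-a --project $PROJECT_ID
```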
Apply the Prow and boskos configuration:
```bash
# Deploy boskos
kubectl apply -f boskos/boskos.yaml # Must be applied first to create the namespace
kubectl apply -f boskos/boskos-config.yaml
kubectl apply -f boskos/storage-class.yaml

# Deploy GitHub Proxy
kubectl apply -f prow/gce-ssd-retain_storageclass.yaml
kubectl apply -f prow/ghproxy.yaml

# Deploy Prow
kubectl apply -f prow/prowjob-schemaless_customresourcedefinition.yaml
kubectl apply -f prow/prow.yaml
kubectl apply -f prow/cherrypicker_deployment.yaml
kubectl apply -f prow/cherrypicker_service.yaml

# Deploy daemonset to configure fs.inotify.max_user_[watches,instances] via sysctl.
# This is to deal with kind having issues like https://kind.sigs.k8s.io/docs/user/known-issues/#pod-errors-due-to-too-many-open-files
kubectl apply -f prow/tune-sysctls_daemonset.yaml

# Create Prow's configuration
kubectl create configmap config --from-file=config.yaml=prow/config.yaml
kubectl create configmap plugins --from-file=plugins.yaml=prow/plugins.yaml
```
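A quick sanity check that everything came up (component names like `deck`, `hook`, and `tide` are the ones deployed by prow.yaml):

```bash
# The Prow components should reach Running; both configmaps should exist.
kubectl get pods
kubectl get configmaps config plugins
```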
To get ingress working properly, you must:
- Install and configure cert-manager. cert-manager can be installed via Helm using this guide (a sketch follows this list).
- Apply the ingress resource and update the prow.tekton.dev DNS configuration.
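For reference, a minimal sketch of the Helm install, assuming the upstream jetstack chart repository (see the linked guide for the authoritative, version-pinned steps):

```bash
# Install cert-manager from the jetstack Helm repo; installCRDs=true tells
# the chart to install cert-manager's CRDs as well.
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set installCRDs=true
```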
To apply the ingress resource:
```bash
# Apply the ingress resource, configured to use `prow.tekton.dev`
kubectl apply -f prow/ingress.yaml
```
To see the IP of the ingress in the new cluster:
```bash
kubectl get ingress ing
```
You should be able to navigate to this endpoint in your browser and see the Prow landing page.
Then you can update https://prow.tekton.dev to point at the cluster ingress address. (It is not clear who has access to this domain name registration; someone in the Linux Foundation? dlorenc@ can provide more info.)
You will need to configure GitHub's webhook(s) to point at the ingress of the new Prow cluster (or you can use the domain name).
For tektoncd this is configured at the Org level.
- github.com/tektoncd -> Settings -> Webhooks -> http://some-ingress-ip/hook

Update the value of the webhook with http://ingress-address/hook (see kicking the tires to get the ingress IP, or the sketch below).
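One way to grab that IP directly (the ingress is named `ing`, as created by prow/ingress.yaml):

```bash
# Print just the external IP of the `ing` ingress.
kubectl get ingress ing -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```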
OAuth setup is done following the official guide. The "Prow" OAuth GitHub application is defined in the tektoncd GitHub org.
Prow has been installed by taking the starter.yaml and modifying it for our needs.
Updating (e.g. bumping the versions of the images being used) requires:

- If you are feeling cautious and motivated, manually back up the config values by hand (see prow.yaml to see which values will be changed; a backup sketch follows this list).
- Manually updating the `image` values and applying any other config changes found in the starter.yaml to our prow.yaml.
- Updating the `utility_images` in our config.yaml if the version of the `plank` component is changed.
- Applying the new configuration with:

```bash
# Step 1: Configure kubectl to use the cluster; this doesn't have to be via
# gcloud, but gcloud makes it easy
gcloud container clusters get-credentials prow --zone us-central1-a --project tekton-releases

# Step 2: Update Prow itself
kubectl apply -f prow/prow.yaml

# Step 3: Update the configuration used by Prow
kubectl create configmap config --from-file=config.yaml=prow/config.yaml --dry-run -o yaml | kubectl replace -f -

# Step 4: Remember to configure kubectl to connect to your regular cluster!
gcloud container clusters get-credentials ...
```

- Verify that the changes are working by opening a PR and manually looking at the logs of each check, in case Prow has gotten into a state where failures are being reported as successes.
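For the backup step mentioned above, a minimal sketch (the file names are arbitrary):

```bash
# Dump the live config and Prow resources before changing anything.
kubectl get configmap config -o yaml > config-backup.yaml
kubectl get configmap plugins -o yaml > plugins-backup.yaml
kubectl get -f prow/prow.yaml -o yaml > prow-backup.yaml
```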
These values have been removed from the original starter.yaml:

- The `ConfigMap` values `plugins` and `config`, because they are generated from config.yaml and plugins.yaml
- The `Services` which were manually configured with a `ClusterIP` and other routing information (`deck`, `tide`, `hook`)
- The `Ingress` `ing` - configuration for this is in ingress.yaml
- The `statusreconciler` Deployment, etc. - created #54 to investigate adding this.
- The `Role` values which give `pod` permissions in the `default` namespace as well as `test-pods` - the intention seems to be that `test-pods` is used to run the pods themselves, but we don't currently have that configured in our config.yaml.
Tekton Pipelines is also installed in the `prow` cluster so that Prow can trigger the execution of `PipelineRuns`.
Prow only supports the pipelines v1alpha1 API, so the most recent Tekton Pipelines version that can be used is v0.13.1:

```bash
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.13.1/release.yaml
```
See also Tekton Pipelines installation instructions.
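A quick check that the controller and webhook came up (the release.yaml installs them into the `tekton-pipelines` namespace):

```bash
# Both tekton-pipelines-controller and tekton-pipelines-webhook should
# reach Running.
kubectl get pods -n tekton-pipelines
```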
Changes to config.yaml are automatically applied to the `prow` cluster via a Tekton task that runs in the `dogfooding` cluster.
To apply the configuration "manually":
```bash
# Step 1: Configure kubectl to use the cluster; this doesn't have to be via
# gcloud, but gcloud makes it easy
gcloud container clusters get-credentials prow --zone us-central1-a --project tekton-releases

# Step 2: Update the configuration used by Prow
kubectl create configmap config --from-file=config.yaml=prow/config.yaml --dry-run -o yaml | kubectl replace -f -

# Step 3: Remember to configure kubectl to connect to your regular cluster!
gcloud container clusters get-credentials ...
```