Setup conda-store helm charts for automated build and publish #365

Merged (4 commits) on Aug 16, 2022
78 changes: 78 additions & 0 deletions .github/workflows/release.yaml
Expand Up @@ -103,3 +103,81 @@ jobs:
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  build-publish-helm-chart:
    name: Build and publish Helm chart
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write

    steps:
      - name: Setup the repository
        uses: actions/checkout@v3
        with:
          fetch-depth: 0

      - name: Setup Python
        uses: actions/setup-python@v3

      - name: Install chart publishing dependencies
        run: |
          pip install chartpress pyyaml
          pip list
          helm version

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Configure a git user
        run: |
          git config --global user.email "[email protected]"
          git config --global user.name "GitHub Actions user"

      - name: Build and publish Helm chart with chartpress
        env:
          GITHUB_ACTOR: ""
          GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
        run: |
          PUBLISH_ARGS="--push --publish-chart \
            --builder=docker-buildx \
            --platform=linux/amd64 \
            --platform=linux/arm64"

          # chartpress needs to run next to resources/helm/chartpress.yaml
          cd resources/helm/

          # chartpress uses git to push to our Helm chart repository.
          # Ensure that the permissions to the Docker registry are
          # already configured.

          if [[ $GITHUB_REF != refs/tags/* ]]; then
            # Using --extra-message, we help readers of merged PRs know what
            # version they need to bump to in order to make use of the PR.
            #
            # ref: https://github.com/jupyterhub/chartpress#usage
            #
            # NOTE: GitHub merge commits contain a PR reference like #123.
            # `sed` is used to extract a PR reference like #123 or a commit
            # hash reference like @123abcd.

            PR_OR_HASH=$(git log -1 --pretty=%h-%B | head -n1 | sed 's/^.*\(#[0-9]*\).*/\1/' | sed 's/^\([0-9a-f]*\)-.*/@\1/')
            LATEST_COMMIT_TITLE=$(git log -1 --pretty=%B | head -n1)
            EXTRA_MESSAGE="${GITHUB_REPOSITORY}${PR_OR_HASH} ${LATEST_COMMIT_TITLE}"

            chartpress $PUBLISH_ARGS --extra-message "${EXTRA_MESSAGE}"
          else
            # Setting a tag explicitly enforces a rebuild if this tag had
            # already been built and we want to override it.

            chartpress $PUBLISH_ARGS --tag "${GITHUB_REF:10}"
          fi
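The `sed` chain in the step above can be exercised on its own; the commit hash and PR number below are made-up stand-ins for real `git log -1 --pretty=%h-%B | head -n1` output:

```shell
# A merge commit carrying a PR reference yields "#<PR number>":
echo '04d9611-Merge pull request #365 from Quansight/helm-chart' \
  | sed 's/^.*\(#[0-9]*\).*/\1/' | sed 's/^\([0-9a-f]*\)-.*/@\1/'
# -> #365

# A commit without a PR reference falls through to "@<short hash>":
echo '04d9611-Fix a typo in the docs' \
  | sed 's/^.*\(#[0-9]*\).*/\1/' | sed 's/^\([0-9a-f]*\)-.*/@\1/'
# -> @04d9611
```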
27 changes: 27 additions & 0 deletions resources/helm/chartpress.yaml
@@ -0,0 +1,27 @@
# This is the configuration for chartpress, a CLI for Helm chart management.
#
# chartpress is used to test, package, and publish the conda-store Helm chart.
# Concretely, it is used to:
# - Build images for multiple CPU architectures
# - Update Chart.yaml (version) and values.yaml (image tags)
# - Package and publish Helm charts to a GitHub-based Helm chart repository
#
# Configuration reference:
# https://github.com/jupyterhub/chartpress#configuration
#
charts:
  - name: conda-store
    imagePrefix: quansight/
    repo:
      git: quansight/conda-store-helm-chart # Not yet published
      published: https://quansight.github.io/conda-store-helm-chart # Not yet published
    images:
      conda-store:
        imageName: quansight/conda-store
        contextPath: ../../conda-store
        valuesPath: {}
      conda-store-server:
        imageName: quansight/conda-store-server
        contextPath: ../../conda-store-server
        valuesPath: {}
11 changes: 11 additions & 0 deletions resources/helm/conda-store/Chart.yaml
@@ -0,0 +1,11 @@
# Chart.yaml v2 reference: https://helm.sh/docs/topics/charts/#the-chartyaml-file
apiVersion: v2
name: conda-store
version: 0.4.6-n005.h04d9611
appVersion: 0.4.7
description: Serve identical Conda environments in as many ways as possible
home: https://conda-store.readthedocs.io/
sources:
- https://github.com/Quansight/conda-store
icon: https://github.com/Quansight.png
kubeVersion: ">=1.20.0-0"
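The `version` above follows chartpress's development-version scheme, `<latest tag>-n<commits since tag>.h<short commit hash>`. A minimal sketch of how such a string is assembled, with the tag, commit count, and hash read off the version string above:

```shell
# Assemble a chartpress-style development version:
#   <latest tag>-n<commits since tag, zero-padded>.h<short hash>
latest_tag="0.4.6"
commits_since_tag=5
short_hash="04d9611"
printf '%s-n%03d.h%s\n' "$latest_tag" "$commits_since_tag" "$short_hash"
# -> 0.4.6-n005.h04d9611
```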
232 changes: 232 additions & 0 deletions resources/helm/conda-store/values.yaml
@@ -0,0 +1,232 @@
gateway:
# Number of instances of the conda-store-server to run
replicas: 1

# Annotations to apply to the conda-store-server pods
annotations: {}

# Resource requests/limits for the conda-store-server pod
resources: {}

# Path prefix to serve conda-store-server api requests under
prefix: /

# The conda-store-server log level
loglevel: INFO

# The image to use for the conda-store-server pod
image:
name: quansight/conda-store-server
tag: "set-by-chartpress"
pullPolicy: IfNotPresent

imagePullSecrets: []

# Configuration for the conda-store-server
service:
annotations: {}

auth:
# The auth type to use. One of {simple, kerberos, jupyterhub, custom}.
type: simple

simple:
# A shared password to use for all users.
password:

kerberos:
# Path to the HTTP keytab for this node.
keytab:

jupyterhub:
apiToken:
apiUrl:

custom:
# The full authenticator class name.
class:

# Configuration fields to set on the authenticator class.
config: {}

livenessProbe:
# Enables the livenessProbe.
enabled: true
# Configures the livenessProbe.
initialDelaySeconds: 5
timeoutSeconds: 2
periodSeconds: 10
failureThreshold: 6
readinessProbe:
# Enables the readinessProbe.
enabled: true
# Configures the readinessProbe.
initialDelaySeconds: 5
timeoutSeconds: 2
periodSeconds: 10
failureThreshold: 3

# nodeSelector, affinity, and tolerations for the conda-store-server `api` pod
nodeSelector: {}
affinity: {}
tolerations: []

extraConfig: {}

backend:
image:
# The image to use for both schedulers and workers
name: quansight/conda-store
tag: "set-by-chartpress"
pullPolicy: IfNotPresent

namespace:

# A mapping of environment variables to set for both schedulers and workers.
environment: {}

scheduler:
extraPodConfig: {}

extraContainerConfig: {}

# Cores request/limit for the scheduler.
cores:
request:
limit:

# Memory request/limit for the scheduler.
memory:
request:
limit:

worker:
extraPodConfig: {}

extraContainerConfig: {}

# Cores request/limit for each worker.
cores:
request:
limit:

# Memory request/limit for each worker.
memory:
request:
limit:

threads:

controller:
enabled: true

# Any annotations to add to the controller pod
annotations: {}

# Resource requests/limits for the controller pod
resources: {}

# Image pull secrets for controller pod
imagePullSecrets: []

# The controller log level
loglevel: INFO

# Max time (in seconds) to keep around records of completed clusters.
# Default is 24 hours.
completedClusterMaxAge: 86400

# Time (in seconds) between cleanup tasks removing records of completed
# clusters. Default is 5 minutes.
completedClusterCleanupPeriod: 600

# Base delay (in seconds) for backoff when retrying after failures.
backoffBaseDelay: 0.1

# Max delay (in seconds) for backoff when retrying after failures.
backoffMaxDelay: 300

# Limit on the average number of k8s api calls per second.
k8sApiRateLimit: 50

# Limit on the maximum number of k8s api calls per second.
k8sApiRateLimitBurst: 100

# The image to use for the controller pod.
image:
name: quansight/conda-store-server
tag: "set-by-chartpress"
pullPolicy: IfNotPresent

# Settings for nodeSelector, affinity, and tolerations for the controller pods
nodeSelector: {}
affinity: {}
tolerations: []

# The traefik nested config relates to the traefik pod, and to the Traefik
# instance running within it, which acts as a proxy for traffic towards the gateway
traefik:
# Number of instances of the proxy to run
replicas: 1

# Any annotations to add to the proxy pods
annotations: {}

# Resource requests/limits for the proxy pods
resources: {}

# The image to use for the proxy pod
image:
name: traefik
tag: "2.6.3"
pullPolicy: IfNotPresent
imagePullSecrets: []

# Any additional arguments to forward to traefik
additionalArguments: []

# The proxy log level
loglevel: WARN

# Whether to expose the dashboard on port 9000 (enable for debugging only!)
dashboard: false

# Additional configuration for the traefik service
service:
type: LoadBalancer
annotations: {}
spec: {}
ports:
web:
port: 80
nodePort:
tcp:
port: web
nodePort:

nodeSelector: {}
affinity: {}
tolerations: []

# The rbac nested configuration relates to the choice of creating resources
# like (Cluster)Role, (Cluster)RoleBinding, and ServiceAccount, or referencing
# existing ones.
rbac:
enabled: true

# Existing names to use if ClusterRoles, ClusterRoleBindings, and
# ServiceAccounts have already been created by other means (leave set to
# `null` to create all required roles at install time)
controller:
serviceAccountName:

gateway:
serviceAccountName:

traefik:
serviceAccountName:

# The global nested configuration is accessible by all Helm charts that may
# depend on each other, but it is not used by this Helm chart. An entry is
# created here to validate its use and catch YAML typos via this
# configuration's associated JSON schema.
global: {}
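As a usage sketch, a deployment could override a few of the defaults above with a custom values file; the file name and the chosen values below are hypothetical:

```yaml
# my-values.yaml -- hypothetical overrides passed to `helm install -f`
gateway:
  replicas: 2                # run two conda-store-server instances
  auth:
    type: simple
    simple:
      password: "change-me"  # shared password; assumes simple auth is chosen
traefik:
  service:
    type: ClusterIP          # e.g. when an existing ingress fronts the proxy
```

With the chart checked out locally, this would be applied with something like `helm install conda-store resources/helm/conda-store -f my-values.yaml`.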