"Preserve your precious artifacts... in the cloud!"
ChartMuseum is an open-source Helm Chart Repository written in Go (Golang), with support for cloud storage backends, including Google Cloud Storage, Amazon S3, Microsoft Azure Blob Storage, Alibaba Cloud OSS Storage, and OpenStack Object Storage.
It works as a valid Helm Chart Repository, and also provides an API for uploading new chart packages to storage, among other things.
Powered by some great Go technology:
- Kubernetes Helm - for working with charts and generating the repository index
- Gin Web Framework - for HTTP routing
- cli - for command line option parsing
- zap - for logging
"Finally!!"
"ChartMuseum is awesome"
"This is awesome!"
"Oh yes!!!! I’ve been waiting for this for so long. Makes life much easier, especially for the index.yaml creation!"
"I was thinking about writing one of these up myself. This is perfect! thanks!"
"I am jumping for joy over ChartMuseum, a full-fledged Helm repository server with upload!"
"This is really cool ... We currently have a process that generates the index file and then uploads, so this is nice"
"Really a good idea ... really really great, thanks again. I can use nginx to hold the repos and the museum to add/delete the chart. That's a whole life cycle management of chart with the current helm"
"thanks for building the museum!"
- GET /index.yaml - retrieved when you run helm repo add chartmuseum http://localhost:8080/
- GET /charts/mychart-0.1.0.tgz - retrieved when you run helm install chartmuseum/mychart
- GET /charts/mychart-0.1.0.tgz.prov - retrieved when you run helm install with the --verify flag
- POST /api/charts - upload a new chart version
- POST /api/prov - upload a new provenance file
- DELETE /api/charts/<name>/<version> - delete a chart version (and corresponding provenance file)
- GET /api/charts - list all charts
- GET /api/charts/<name> - list all versions of a chart
- GET /api/charts/<name>/<version> - describe a chart version
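For example, against a server running locally as described in the "How to Run" section below, these routes can be exercised with curl (mychart and 0.1.0 are placeholder values):

# list all charts
curl http://localhost:8080/api/charts
# list all versions of mychart
curl http://localhost:8080/api/charts/mychart
# describe mychart version 0.1.0
curl http://localhost:8080/api/charts/mychart/0.1.0
# delete mychart version 0.1.0 (and its provenance file, if any)
curl -X DELETE http://localhost:8080/api/charts/mychart/0.1.0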
- GET / - HTML welcome page
- GET /health - returns 200 OK
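A quick way to verify a running server (assuming the local setup from "How to Run" below) is to hit the health route:

curl -i http://localhost:8080/health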
Follow "How to Run" section below to get ChartMuseum up and running at http://localhost:8080
First, create mychart-0.1.0.tgz using the Helm CLI:
cd mychart/
helm package .
Upload mychart-0.1.0.tgz:
curl --data-binary "@mychart-0.1.0.tgz" http://localhost:8080/api/charts
If you've signed your package and generated a provenance file, upload it with:
curl --data-binary "@mychart-0.1.0.tgz.prov" http://localhost:8080/api/prov
Both files can also be uploaded at once (or one at a time) on the /api/charts route using the multipart/form-data format:
curl -F "[email protected]" -F "[email protected]" http://localhost:8080/api/charts
You can also use the helm-push plugin:
helm push mychart/ chartmuseum
Add the URL of your ChartMuseum installation to the local repository list:
helm repo add chartmuseum http://localhost:8080
Search for charts:
helm search chartmuseum/
Install chart:
helm install chartmuseum/mychart
Install the binary:
# on Linux
curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/linux/amd64/chartmuseum
# on macOS
curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/darwin/amd64/chartmuseum
# on Windows
curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/windows/amd64/chartmuseum
chmod +x ./chartmuseum
mv ./chartmuseum /usr/local/bin
Using latest in the URLs above will get the latest binary (built from the master branch). Replace latest with $(curl -s https://s3.amazonaws.com/chartmuseum/release/stable.txt) to automatically determine the latest stable release (e.g. v0.7.1). Determine your version with chartmuseum --version.

Show all CLI options with chartmuseum --help. Common configurations can be seen below.
All command-line options can be specified as environment variables, which are defined by the command-line option, capitalized, with all -'s replaced with _'s. For example, the env var STORAGE_AMAZON_BUCKET can be used in place of --storage-amazon-bucket.
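As an illustration, the following two invocations are equivalent; the env var names simply follow the capitalization rule above, and the local storage settings are the placeholder values used elsewhere in this README:

# flags on the command line
chartmuseum --port=8080 --storage="local" --storage-local-rootdir="./chartstorage"

# the same configuration via environment variables
export PORT=8080
export STORAGE="local"
export STORAGE_LOCAL_ROOTDIR="./chartstorage"
chartmuseum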
Make sure your environment is properly set up to access my-s3-bucket.
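One common approach (an assumption here, not a ChartMuseum-specific requirement) is to rely on the standard AWS credential chain, e.g. a ~/.aws/credentials file, an instance profile, or the usual AWS environment variables:

# placeholder credentials - substitute your own, or use ~/.aws/credentials / an instance profile instead
export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"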
chartmuseum --debug --port=8080 \
--storage="amazon" \
--storage-amazon-bucket="my-s3-bucket" \
--storage-amazon-prefix="" \
--storage-amazon-region="us-east-1"
You need at least the following permissions inside your IAM policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListObjects",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::my-s3-bucket"
},
{
"Sid": "AllowObjectsCRUD",
"Effect": "Allow",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::my-s3-bucket/*"
}
]
}
Make sure your environment is properly set up to access my-gcs-bucket.
One way to do so is to set the GOOGLE_APPLICATION_CREDENTIALS var in your environment, pointing to the JSON file containing your service account key:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/[FILE_NAME].json"
More info on Google Cloud authentication can be found in the Google Cloud documentation.
chartmuseum --debug --port=8080 \
--storage="google" \
--storage-google-bucket="my-gcs-bucket" \
--storage-google-prefix=""
Make sure your environment is properly set up to access mycontainer.
To do so, you must set the following env vars:
- AZURE_STORAGE_ACCOUNT
- AZURE_STORAGE_ACCESS_KEY
chartmuseum --debug --port=8080 \
--storage="microsoft" \
--storage-microsoft-container="mycontainer" \
--storage-microsoft-prefix=""
Make sure your environment is properly set up to access my-oss-bucket.
To do so, you must set the following env vars:
- ALIBABA_CLOUD_ACCESS_KEY_ID
- ALIBABA_CLOUD_ACCESS_KEY_SECRET
chartmuseum --debug --port=8080 \
--storage="alibaba" \
--storage-alibaba-bucket="my-oss-bucket" \
--storage-alibaba-prefix="" \
--storage-alibaba-endpoint="oss-cn-beijing.aliyuncs.com"
Make sure your environment is properly set up to access mycontainer.
To do so, you must set the following env vars (depending on your OpenStack version):
- OS_AUTH_URL
- either OS_PROJECT_NAME or OS_TENANT_NAME or OS_PROJECT_ID or OS_TENANT_ID
- either OS_DOMAIN_NAME or OS_DOMAIN_ID
- either OS_USERNAME or OS_USERID
- OS_PASSWORD
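For example, one valid combination (Keystone v3-style, with placeholder values) might look like:

# placeholder values - substitute your own auth URL, project, domain, and credentials
export OS_AUTH_URL="https://keystone.example.com:5000/v3"
export OS_PROJECT_NAME="myproject"
export OS_DOMAIN_NAME="Default"
export OS_USERNAME="myuser"
export OS_PASSWORD="mypassword"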
chartmuseum --debug --port=8080 \
--storage="openstack" \
--storage-openstack-container="mycontainer" \
--storage-openstack-prefix="" \
--storage-openstack-region="myregion"
Make sure you have read-write access to ./chartstorage (it will be created on first upload if it doesn't exist).
chartmuseum --debug --port=8080 \
--storage="local" \
--storage-local-rootdir="./chartstorage"
If both of the following options are provided, basic http authentication will protect all routes:
- --basic-auth-user=<user> - username for basic http authentication
- --basic-auth-pass=<pass> - password for basic http authentication

You may want basic auth to only be applied to operations that can change charts, i.e. PUT, POST and DELETE. To avoid basic auth on GET operations, use:
- --auth-anonymous-get - allow anonymous GET operations
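As an illustrative sketch (the credentials and local storage settings below are placeholders, not defaults), a server that protects write operations while allowing anonymous downloads could be started and used like this:

chartmuseum --debug --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage" \
  --basic-auth-user="myuser" \
  --basic-auth-pass="mypass" \
  --auth-anonymous-get

# uploads must authenticate
curl -u myuser:mypass --data-binary "@mychart-0.1.0.tgz" http://localhost:8080/api/charts
# index and chart downloads remain anonymous
curl http://localhost:8080/index.yaml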
If both of the following options are provided, the server will listen and serve HTTPS:
- --tls-cert=<crt> - path to tls certificate chain file
- --tls-key=<key> - path to tls key file
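A minimal sketch, assuming you already have a certificate chain and key at the placeholder paths shown:

chartmuseum --debug --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage" \
  --tls-cert="/etc/chartmuseum/tls/chartmuseum.crt" \
  --tls-key="/etc/chartmuseum/tls/chartmuseum.key"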
You can specify the --gen-index option if you only wish to use ChartMuseum to generate your index.yaml file. Note that this will only work with --depth=0.
The contents of index.yaml will be printed to stdout and the program will exit. This is useful if you are satisfied with your current Helm CI/CD process and/or don't want to monitor another webservice.
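A minimal sketch, reusing the local storage settings from the example above; the generated index.yaml is printed to stdout, where you can capture it as needed:

chartmuseum --gen-index --depth=0 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage"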
- --log-json - output structured logs as json
- --disable-api - disable all routes prefixed with /api
- --disable-statefiles - disable use of index-cache.yaml
- --allow-overwrite - allow chart versions to be re-uploaded without ?force querystring
- --disable-force-overwrite - do not allow chart versions to be re-uploaded, even with ?force querystring
- --chart-url=<url> - absolute url for .tgzs in index.yaml
- --storage-amazon-endpoint=<endpoint> - alternative s3 endpoint
- --storage-amazon-sse=<algorithm> - s3 server side encryption algorithm
- --storage-openstack-cacert=<path> - path to a custom ca certificates bundle for openstack
- --chart-post-form-field-name=<field> - form field which will be queried for the chart file content
- --prov-post-form-field-name=<field> - form field which will be queried for the provenance file content
- --index-limit=<number> - limit the number of parallel indexers
- --context-path=<path> - base context path (new root for application routes)
- --depth=<number> - levels of nested repos for multitenancy
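As a purely illustrative combination of some of these flags (the URL and storage settings are placeholders), a read-only server that logs JSON and advertises an external download URL might be started like this:

chartmuseum --port=8080 \
  --storage="local" \
  --storage-local-rootdir="./chartstorage" \
  --disable-api \
  --log-json \
  --chart-url="https://charts.example.com"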
Available via Docker Hub.
Example usage (S3):
docker run --rm -it \
-p 8080:8080 \
-e PORT=8080 \
-e DEBUG=1 \
-e STORAGE="amazon" \
-e STORAGE_AMAZON_BUCKET="my-s3-bucket" \
-e STORAGE_AMAZON_PREFIX="" \
-e STORAGE_AMAZON_REGION="us-east-1" \
-v ~/.aws:/root/.aws:ro \
chartmuseum/chartmuseum:latest
There is a Helm chart for ChartMuseum itself which can be found in the official Kubernetes Charts repository.
You can also view it on Kubeapps Hub.
To install:
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm install stable/chartmuseum
If interested in making changes, please submit a PR to kubernetes/charts. Before doing any work, please check for any currently open pull requests. Thanks!
Multitenancy is supported with the --depth flag.
To begin, start with a directory structure such as:
charts
├── org1
│ ├── repoa
│ │ └── nginx-ingress-0.9.3.tgz
├── org2
│ ├── repob
│ │ └── chartmuseum-0.4.0.tgz
This represents a storage layout appropriate for --depth=2. The organization level can be eliminated by using --depth=1. The default depth is 0 (singletenant server).
Start the server with --depth=2, pointing to the charts/ directory:
chartmuseum --debug --depth=2 --storage="local" --storage-local-rootdir=./charts
This example will provide two separate Helm Chart Repositories at the following locations:
http://localhost:8080/org1/repoa
http://localhost:8080/org2/repob
This should work with all supported storage backends.
To use the chart manipulation routes, simply place the name of the repo directly after "/api" in the route:
curl -F "[email protected]" http://localhost:8080/api/org1/repoa/charts
By default, the contents of index.yaml (per-tenant) will be stored in memory. This means that memory usage will continue to grow indefinitely as more charts are added to storage.
You may wish to offload this to an external cache store, especially for large, multitenant installations.
Example of using Redis as an external cache store:
chartmuseum --debug --port=8080 \
--storage="local" \
--storage-local-rootdir="./chartstorage" \
--cache="redis" \
--cache-redis-addr="localhost:6379" \
--cache-redis-password="" \
--cache-redis-db=0
ChartMuseum exposes its Prometheus metrics at the /metrics route on the main port. This can be disabled with the --disable-metrics command-line flag or the DISABLE_METRICS environment variable.
Note that the Kubernetes chart currently disables metrics by default (DISABLE_METRICS=true is set in the chart).
Below are the current application metrics exposed. Note that there is a per-tenant (repo) label. The repo label corresponds to the depth parameter, so with --depth=2, as in the example above, you would have repo labels named org1/repoa and org2/repob.
| Metric | Type | Labels | Description |
|---|---|---|---|
| chartmuseum_charts_served_total | Gauge | {repo="*"} | Total number of charts |
| chartmuseum_charts_versions_served_total | Gauge | {repo="*"} | Total number of chart versions available |

*: see above for the repo label
There are other general global metrics harvested (per process, hence for all tenants). You can get the complete list by using the /metrics route.
| Metric | Type | Labels | Description |
|---|---|---|---|
| chartmuseum_request_duration_seconds | Summary | {quantile="0.5"}, {quantile="0.9"}, {quantile="0.99"} | The HTTP request latencies in seconds |
| chartmuseum_request_duration_seconds_sum | | | |
| chartmuseum_request_duration_seconds_count | | | |
| chartmuseum_request_size_bytes | Summary | {quantile="0.5"}, {quantile="0.9"}, {quantile="0.99"} | The HTTP request sizes in bytes |
| chartmuseum_request_size_bytes_sum | | | |
| chartmuseum_request_size_bytes_count | | | |
| chartmuseum_response_size_bytes | Summary | {quantile="0.5"}, {quantile="0.9"}, {quantile="0.99"} | The HTTP response sizes in bytes |
| chartmuseum_response_size_bytes_sum | | | |
| chartmuseum_response_size_bytes_count | | | |
| go_goroutines | Gauge | | Number of goroutines that currently exist |
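The complete, current list of metrics can be inspected directly (assuming a local server on port 8080, as in the examples above):

curl http://localhost:8080/metrics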
The repository index (index.yaml) is dynamically generated based on packages found in storage. If you store your own version of index.yaml, it will be completely ignored.
GET /index.yaml occurs when you run helm repo add chartmuseum http://localhost:8080 or helm repo update.
If you manually add/remove a .tgz package from storage, it will be immediately reflected in GET /index.yaml.
You are no longer required to maintain your own version of index.yaml using helm repo index --merge.
The --gen-index CLI option (described above) can be used to generate and print index.yaml to stdout.
Upon index regeneration, ChartMuseum will, however, save a statefile in storage called index-cache.yaml, which is used for cache optimization. This file is only meant for internal use, but it may also be useful for migration to simple storage.
Please see scripts/mirror_k8s_repos.sh for an example of how to download all .tgz packages from the official Kubernetes repositories (both stable and incubator).
You can then use ChartMuseum to serve up an internal mirror:
scripts/mirror_k8s_repos.sh
chartmuseum --debug --port=8080 --storage="local" --storage-local-rootdir="./mirror"
You can reach the ChartMuseum community and developers in the Kubernetes Slack #chartmuseum channel.