From 3d2817dd0d2fa040401fee4b24ba94c65ca07f13 Mon Sep 17 00:00:00 2001
From: stefanprodan
Date: Tue, 3 Mar 2020 09:04:02 +0200
Subject: [PATCH 01/10] Add changelog for v1.0.0-rc.1

---
 CHANGELOG.md | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 85d43d6bd..98fcfc153 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -2,6 +2,41 @@ All notable changes to this project are documented in this file.
 
+## 1.0.0-rc.1 (2020-03-03)
+
+This is a release candidate for Flagger v1.0.0.
+
+The upgrade procedure from 0.x to 1.0 can be found [here](https://docs.flagger.app/dev/upgrade-guide).
+
+Two new resources were added to the API: `MetricTemplate` and `AlertProvider`.
+The analysis can reference [metric templates](https://docs.flagger.app/usage/metrics#custom-metrics)
+to query Prometheus, Datadog and AWS CloudWatch.
+[Alerting](https://docs.flagger.app/v/master/usage/alerting#canary-configuration) can be configured on a per
+canary basis for Slack, MS Teams, Discord and Rocket.
+
+#### Features
+
+- Implement metric templates for Prometheus [#419](https://github.com/weaveworks/flagger/pull/419)
+  Datadog [#460](https://github.com/weaveworks/flagger/pull/460) and
+  CloudWatch [#464](https://github.com/weaveworks/flagger/pull/464)
+- Implement metric range validation [#424](https://github.com/weaveworks/flagger/pull/424)
+- Add support for targeting DaemonSets [#455](https://github.com/weaveworks/flagger/pull/455)
+- Implement canary alerts and alert providers (Slack, MS Teams, Discord and Rocket) [#429](https://github.com/weaveworks/flagger/pull/429)
+
+#### Improvements
+
+- Add support for Istio multi-cluster [#447](https://github.com/weaveworks/flagger/pull/447) [#450](https://github.com/weaveworks/flagger/pull/450)
+- Extend Istio traffic policy [#441](https://github.com/weaveworks/flagger/pull/441),
+  add support for header operations [#442](https://github.com/weaveworks/flagger/pull/442) and
+  set ingress destination port when multiple ports are discovered [#436](https://github.com/weaveworks/flagger/pull/436).
+- Add support for rollback gating [#449](https://github.com/weaveworks/flagger/pull/449) +- Allow disabling ConfigMaps and Secrets tracking [#425](https://github.com/weaveworks/flagger/pull/425) + +#### Fixes + +- Fix spec changes detection [#446](https://github.com/weaveworks/flagger/pull/446) +- Track projected ConfigMaps and Secrets [#433](https://github.com/weaveworks/flagger/pull/433) + ## 0.23.0 (2020-02-06) Adds support for service name configuration and rollback webhook From 23e6209789065d748c270469f38aaf1fb2449dbd Mon Sep 17 00:00:00 2001 From: stefanprodan Date: Tue, 3 Mar 2020 09:39:37 +0200 Subject: [PATCH 02/10] Release Flagger 1.0.0-rc.1 --- artifacts/flagger/deployment.yaml | 2 +- charts/flagger/Chart.yaml | 4 ++-- charts/flagger/values.yaml | 2 +- kustomize/base/flagger/kustomization.yaml | 2 +- pkg/version/version.go | 2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/artifacts/flagger/deployment.yaml b/artifacts/flagger/deployment.yaml index 801395e3b..4a04cac48 100644 --- a/artifacts/flagger/deployment.yaml +++ b/artifacts/flagger/deployment.yaml @@ -22,7 +22,7 @@ spec: serviceAccountName: flagger containers: - name: flagger - image: weaveworks/flagger:0.23.0 + image: weaveworks/flagger:1.0.0-rc.1 imagePullPolicy: IfNotPresent ports: - name: http diff --git a/charts/flagger/Chart.yaml b/charts/flagger/Chart.yaml index 8b6571830..ba9c734c1 100644 --- a/charts/flagger/Chart.yaml +++ b/charts/flagger/Chart.yaml @@ -1,7 +1,7 @@ apiVersion: v1 name: flagger -version: 0.23.0 -appVersion: 0.23.0 +version: 0.24.0 +appVersion: 1.0.0-rc.1 kubeVersion: ">=1.11.0-0" engine: gotpl description: Flagger is a progressive delivery operator for Kubernetes diff --git a/charts/flagger/values.yaml b/charts/flagger/values.yaml index 3746659e8..24495ef08 100644 --- a/charts/flagger/values.yaml +++ b/charts/flagger/values.yaml @@ -2,7 +2,7 @@ image: repository: weaveworks/flagger - tag: 0.23.0 + tag: 1.0.0-rc.1 pullPolicy: IfNotPresent pullSecret: diff --git a/kustomize/base/flagger/kustomization.yaml b/kustomize/base/flagger/kustomization.yaml index b5b0c1991..293690a84 100644 --- a/kustomize/base/flagger/kustomization.yaml +++ b/kustomize/base/flagger/kustomization.yaml @@ -8,4 +8,4 @@ resources: - deployment.yaml images: - name: weaveworks/flagger - newTag: 0.23.0 + newTag: 1.0.0-rc.1 diff --git a/pkg/version/version.go b/pkg/version/version.go index 9916a29ab..c03a1b15f 100644 --- a/pkg/version/version.go +++ b/pkg/version/version.go @@ -1,4 +1,4 @@ package version -var VERSION = "0.23.0" +var VERSION = "1.0.0-rc.1" var REVISION = "unknown" From eced0f45c6c4df6a0a036e719f47452569509ece Mon Sep 17 00:00:00 2001 From: stefanprodan Date: Tue, 3 Mar 2020 11:19:48 +0200 Subject: [PATCH 03/10] Update roadmap and readme --- CHANGELOG.md | 39 ++++++++++------- README.md | 79 +++++++++++++++-------------------- docs/gitbook/README.md | 3 +- docs/gitbook/usage/metrics.md | 14 +++---- test/goreleaser.sh | 2 +- 5 files changed, 67 insertions(+), 70 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 98fcfc153..b79a9e496 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -16,19 +16,21 @@ canary basis for Slack, MS Teams, Discord and Rocket. 
#### Features -- Implement metric templates for Prometheus [#419](https://github.com/weaveworks/flagger/pull/419) - Datadog [#460](https://github.com/weaveworks/flagger/pull/460) and +- Implement metric templates for Prometheus [#419](https://github.com/weaveworks/flagger/pull/419), + Datadog [#460](https://github.com/weaveworks/flagger/pull/460) and CloudWatch [#464](https://github.com/weaveworks/flagger/pull/464) - Implement metric range validation [#424](https://github.com/weaveworks/flagger/pull/424) - Add support for targeting DaemonSets [#455](https://github.com/weaveworks/flagger/pull/455) -- Implement canary alerts and alert providers (Slack, MS Teams, Discord and Rocket) [#429](https://github.com/weaveworks/flagger/pull/429) +- Implement canary alerts and alert providers (Slack, MS Teams, Discord and Rocket) + [#429](https://github.com/weaveworks/flagger/pull/429) #### Improvements -- Add support for Istio multi-cluster [#447](https://github.com/weaveworks/flagger/pull/447) [#450](https://github.com/weaveworks/flagger/pull/450) +- Add support for Istio multi-cluster + [#447](https://github.com/weaveworks/flagger/pull/447) [#450](https://github.com/weaveworks/flagger/pull/450) - Extend Istio traffic policy [#441](https://github.com/weaveworks/flagger/pull/441), add support for header operations [#442](https://github.com/weaveworks/flagger/pull/442) and - set ingress destination port when multiple ports are discovered [#436](https://github.com/weaveworks/flagger/pull/436). + set ingress destination port when multiple ports are discovered [#436](https://github.com/weaveworks/flagger/pull/436) - Add support for rollback gating [#449](https://github.com/weaveworks/flagger/pull/449) - Allow disabling ConfigMaps and Secrets tracking [#425](https://github.com/weaveworks/flagger/pull/425) @@ -126,7 +128,8 @@ Fixes promql execution and updates the load testing tools ## 0.20.0 (2019-10-21) -Adds support for [A/B Testing](https://docs.flagger.app/usage/progressive-delivery#traffic-mirroring) and retry policies when using App Mesh +Adds support for [A/B Testing](https://docs.flagger.app/usage/progressive-delivery#traffic-mirroring) +and retry policies when using App Mesh #### Features @@ -161,7 +164,8 @@ Adds support for canary and blue/green [traffic mirroring](https://docs.flagger. 
#### Fixes - Fix port discovery diff [#324](https://github.com/weaveworks/flagger/pull/324) -- Helm chart: Enable Prometheus scraping of Flagger metrics [#2141d88](https://github.com/weaveworks/flagger/commit/2141d88ce1cc6be220dab34171c215a334ecde24) +- Helm chart: Enable Prometheus scraping of Flagger metrics + [#2141d88](https://github.com/weaveworks/flagger/commit/2141d88ce1cc6be220dab34171c215a334ecde24) ## 0.18.6 (2019-10-03) @@ -179,7 +183,8 @@ Adds support for App Mesh conformance tests and latency metric checks ## 0.18.5 (2019-10-02) -Adds support for [confirm-promotion](https://docs.flagger.app/how-it-works#webhooks) webhooks and blue/green deployments when using a service mesh +Adds support for [confirm-promotion](https://docs.flagger.app/how-it-works#webhooks) +webhooks and blue/green deployments when using a service mesh #### Features @@ -264,8 +269,10 @@ Adds support for [manual gating](https://docs.flagger.app/how-it-works#manual-ga #### Breaking changes -- Due to the status sub-resource changes in [#240](https://github.com/weaveworks/flagger/pull/240), when upgrading Flagger the canaries status phase will be reset to `Initialized` -- Upgrading Flagger with Helm will fail due to Helm poor support of CRDs, see [workaround](https://github.com/weaveworks/flagger/issues/223) +- Due to the status sub-resource changes in [#240](https://github.com/weaveworks/flagger/pull/240), + when upgrading Flagger the canaries status phase will be reset to `Initialized` +- Upgrading Flagger with Helm will fail due to Helm poor support of CRDs, + see [workaround](https://github.com/weaveworks/flagger/issues/223) ## 0.17.0 (2019-07-08) @@ -279,12 +286,14 @@ Adds support for Linkerd (SMI Traffic Split API), MS Teams notifications and HA #### Improvements -- Add [Kustomize](https://docs.flagger.app/install/flagger-install-on-kubernetes#install-flagger-with-kustomize) installer [#232](https://github.com/weaveworks/flagger/pull/232) +- Add [Kustomize](https://docs.flagger.app/install/flagger-install-on-kubernetes#install-flagger-with-kustomize) + installer [#232](https://github.com/weaveworks/flagger/pull/232) - Add Pod Security Policy to Helm chart [#234](https://github.com/weaveworks/flagger/pull/234) ## 0.16.0 (2019-06-23) -Adds support for running [Blue/Green deployments](https://docs.flagger.app/usage/blue-green) without a service mesh or ingress controller +Adds support for running [Blue/Green deployments](https://docs.flagger.app/usage/blue-green) +without a service mesh or ingress controller #### Features @@ -316,7 +325,8 @@ Adds support for customising the Istio [traffic policy](https://docs.flagger.app ## 0.14.1 (2019-06-05) -Adds support for running [acceptance/integration tests](https://docs.flagger.app/how-it-works#integration-testing) with Helm test or Bash Bats using pre-rollout hooks +Adds support for running [acceptance/integration tests](https://docs.flagger.app/how-it-works#integration-testing) +with Helm test or Bash Bats using pre-rollout hooks #### Features @@ -363,7 +373,8 @@ Adds support for [NGINX](https://docs.flagger.app/usage/nginx-progressive-delive #### Features - Add support for nginx ingress controller (weighted traffic and A/B testing) [#170](https://github.com/weaveworks/flagger/pull/170) -- Add Prometheus add-on to Flagger Helm chart for App Mesh and NGINX [79b3370](https://github.com/weaveworks/flagger/pull/170/commits/79b337089294a92961bc8446fd185b38c50a32df) +- Add 
Prometheus add-on to Flagger Helm chart for App Mesh and + NGINX [79b3370](https://github.com/weaveworks/flagger/pull/170/commits/79b337089294a92961bc8446fd185b38c50a32df) #### Fixes diff --git a/README.md b/README.md index de7841f15..241edc24c 100644 --- a/README.md +++ b/README.md @@ -6,53 +6,40 @@ [![license](https://img.shields.io/github/license/weaveworks/flagger.svg)](https://github.com/weaveworks/flagger/blob/master/LICENSE) [![release](https://img.shields.io/github/release/weaveworks/flagger/all.svg)](https://github.com/weaveworks/flagger/releases) -Flagger is a Kubernetes operator that automates the promotion of canary deployments -using Istio, Linkerd, App Mesh, NGINX, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis. -The canary analysis can be extended with webhooks for running acceptance tests, -load tests or any other custom validation. - -Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance -indicators like HTTP requests success rate, requests average duration and pods health. -Based on analysis of the KPIs a canary is promoted or aborted, and the analysis result is published to Slack or MS Teams. +Flagger is a progressive delivery tool that automates the release process for applications running on Kubernetes. +It reduces the risk of introducing a new software version in production +by gradually shifting traffic to the new version while measuring metrics and running conformance tests. ![flagger-overview](https://raw.githubusercontent.com/weaveworks/flagger/master/docs/diagrams/flagger-canary-overview.png) -## Documentation +Flagger implements several deployment strategies (Canary releases, A/B testing, Blue/Green mirroring) +using a service mesh (App Mesh, Istio, Linkerd) or an ingress controller (Contour, Gloo, NGINX) for traffic routing. +For release analysis, Flagger can query Prometheus, Datadog or CloudWatch +and for alerting it uses Slack, MS Teams, Discord and Rocket. + +### Documentation -Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.app) +Flagger documentation can be found at [docs.flagger.app](https://docs.flagger.app). 
 * Install
   * [Flagger install on Kubernetes](https://docs.flagger.app/install/flagger-install-on-kubernetes)
-  * [Flagger install on GKE Istio](https://docs.flagger.app/install/flagger-install-on-google-cloud)
-  * [Flagger install on EKS App Mesh](https://docs.flagger.app/install/flagger-install-on-eks-appmesh)
-* How it works
-  * [Canary custom resource](https://docs.flagger.app/how-it-works#canary-custom-resource)
-  * [Routing](https://docs.flagger.app/how-it-works#istio-routing)
-  * [Canary deployment stages](https://docs.flagger.app/how-it-works#canary-deployment)
-  * [Canary analysis](https://docs.flagger.app/how-it-works#canary-analysis)
-  * [HTTP metrics](https://docs.flagger.app/how-it-works#http-metrics)
-  * [Custom metrics](https://docs.flagger.app/how-it-works#custom-metrics)
-  * [Webhooks](https://docs.flagger.app/how-it-works#webhooks)
-  * [Load testing](https://docs.flagger.app/how-it-works#load-testing)
-  * [Manual gating](https://docs.flagger.app/how-it-works#manual-gating)
-  * [FAQ](https://docs.flagger.app/faq)
-  * [Development guide](https://docs.flagger.app/dev-guide)
 * Usage
-  * [Deployment Strategies](https://docs.flagger.app/usage/deployment-strategies)
-  * [Monitoring](https://docs.flagger.app/usage/monitoring)
+  * [How it works](https://docs.flagger.app/usage/how-it-works)
+  * [Deployment strategies](https://docs.flagger.app/usage/deployment-strategies)
+  * [Metrics analysis](https://docs.flagger.app/usage/metrics)
+  * [Webhooks](https://docs.flagger.app/usage/webhooks)
   * [Alerting](https://docs.flagger.app/usage/alerting)
+  * [Monitoring](https://docs.flagger.app/usage/monitoring)
 * Tutorials
-  * [Istio Canary Deployments](https://docs.flagger.app/tutorials/istio-progressive-delivery)
-  * [Istio A/B Testing](https://docs.flagger.app/tutorials/istio-ab-testing)
-  * [Linkerd Canary Deployments](https://docs.flagger.app/tutorials/linkerd-progressive-delivery)
-  * [App Mesh Canary Deployments](https://docs.flagger.app/tutorials/appmesh-progressive-delivery)
-  * [NGINX Canary Deployments](https://docs.flagger.app/tutorials/nginx-progressive-delivery)
-  * [Gloo Canary Deployments](https://docs.flagger.app/tutorials/gloo-progressive-delivery)
-  * [Contour Canary Deployments](https://docs.flagger.app/tutorials/contour-progressive-delivery)
-  * [Kubernetes Blue/Green Deployments](https://docs.flagger.app/tutorials/kubernetes-blue-green)
-  * [Canary deployments with Helm charts and Weave Flux](https://docs.flagger.app/tutorials/canary-helm-gitops)
-
-## Who is using Flagger
+  * [App Mesh](https://docs.flagger.app/tutorials/appmesh-progressive-delivery)
+  * [Istio](https://docs.flagger.app/tutorials/istio-progressive-delivery)
+  * [Linkerd](https://docs.flagger.app/tutorials/linkerd-progressive-delivery)
+  * [Contour](https://docs.flagger.app/tutorials/contour-progressive-delivery)
+  * [Gloo](https://docs.flagger.app/tutorials/gloo-progressive-delivery)
+  * [NGINX Ingress](https://docs.flagger.app/tutorials/nginx-progressive-delivery)
+  * [Kubernetes Blue/Green](https://docs.flagger.app/tutorials/kubernetes-blue-green)
+
+### Who is using Flagger
 
 List of organizations using Flagger:
 
@@ -63,10 +50,10 @@ List of organizations using Flagger:
 
 If you are using Flagger, please submit a PR to add your organization to the list!
 
-## Canary CRD
+### Canary CRD
 
 Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
-then creates a series of objects (Kubernetes deployments, ClusterIP services and Istio or App Mesh virtual services).
+then creates a series of objects (Kubernetes deployments, ClusterIP services, service mesh or ingress routes). These objects expose the application on the mesh and drive the canary analysis and promotion. Flagger keeps track of ConfigMaps and Secrets referenced by a Kubernetes Deployment and triggers a canary analysis if any of those objects change. @@ -187,9 +174,9 @@ spec: name: on-call-msteams ``` -For more details on how the canary analysis and promotion works please [read the docs](https://docs.flagger.app/how-it-works). +For more details on how the canary analysis and promotion works please [read the docs](https://docs.flagger.app/usage/how-it-works). -## Features +### Features | Feature | Istio | Linkerd | App Mesh | NGINX | Gloo | Contour | CNI | | -------------------------------------------- | ------------------ | ------------------ |------------------ |------------------ |------------------ |------------------ |------------------ | @@ -203,14 +190,16 @@ For more details on how the canary analysis and promotion works please [read the | Custom promql checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Traffic policy, CORS, retries and timeouts | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: | -## Roadmap +### Roadmap * Integrate with other service mesh like Consul Connect and ingress controllers like HAProxy, ALB +* Integrate with other metrics providers like InfluxDB, Stackdriver, SignalFX * Add support for comparing the canary metrics to the primary ones and do the validation based on the derivation between the two -## Contributing +### Contributing Flagger is Apache 2.0 licensed and accepts contributions via GitHub pull requests. +To start contributing please read the [development guide](https://docs.flagger.app/dev-guide). When submitting bug reports please include as much details as possible: @@ -220,7 +209,7 @@ When submitting bug reports please include as much details as possible: * what configuration (canary, ingress and workloads definitions) * what happened (Flagger and Proxy logs) -## Getting Help +### Getting Help If you have any questions about Flagger and progressive delivery: @@ -231,4 +220,4 @@ If you have any questions about Flagger and progressive delivery: hands-on training and meetups in your area. * File an [issue](https://github.com/weaveworks/flagger/issues/new). -Your feedback is always welcome! \ No newline at end of file +Your feedback is always welcome! diff --git a/docs/gitbook/README.md b/docs/gitbook/README.md index 8452986a2..16995256a 100644 --- a/docs/gitbook/README.md +++ b/docs/gitbook/README.md @@ -26,7 +26,7 @@ This project is sponsored by [Weaveworks](https://www.weave.works/) To get started with Flagger, chose one of the supported routing providers and [install](install/flagger-install-on-kubernetes.md) Flagger with Helm or Kustomize. 
-After install Flagger you can follow one the tutorials:
+After installing Flagger, you can follow one of the tutorials:
 
 **Service mesh tutorials**
 
@@ -45,4 +45,3 @@ After install Flagger you can follow one the tutorials:
 * [Istio](https://github.com/stefanprodan/gitops-istio)
 * [Linkerd](https://helm.workshop.flagger.dev)
 * [AWS App Mesh](https://eks.hands-on.flagger.dev)
-
diff --git a/docs/gitbook/usage/metrics.md b/docs/gitbook/usage/metrics.md
index 512a76c27..7d0d83859 100644
--- a/docs/gitbook/usage/metrics.md
+++ b/docs/gitbook/usage/metrics.md
@@ -235,18 +235,17 @@ Reference the template in the canary analysis:
 ```
 
-### AWS CloudWatch metrics
+### Amazon CloudWatch
 
-You can create custom metric checks using the AWS CloudWatch metrics provider.
+You can create custom metric checks using the CloudWatch metrics provider.
 
-The template example:
+CloudWatch template example:
 
 ```yaml
 apiVersion: flagger.app/v1alpha1
 kind: MetricTemplate
 metadata:
   name: cloudwatch-error-rate
-  namespace: istio-system
 spec:
   provider:
     type: cloudwatch
@@ -299,20 +298,19 @@ spec:
     ]
 ```
 
-where the query is in the form as in [the AWS' official document](https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-getmetricdata-api/).
+The query format documentation can be found [here](https://aws.amazon.com/premiumsupport/knowledge-center/cloudwatch-getmetricdata-api/).
 
 Reference the template in the canary analysis:
 
 ```yaml
   analysis:
     metrics:
-      - name: "cw custom error rate"
+      - name: "app error rate"
         templateRef:
           name: cloudwatch-error-rate
-          namespace: istio-system
         thresholdRange:
           max: 0.1
         interval: 1m
 ```
 
-Please note that the flagger need AWS IAM permission to perform `cloudwatch:GetMetricData` to use this provider.
+**Note** that Flagger needs AWS IAM permission to perform `cloudwatch:GetMetricData` to use this provider.
diff --git a/test/goreleaser.sh b/test/goreleaser.sh
index f8155be98..3aa37838e 100755
--- a/test/goreleaser.sh
+++ b/test/goreleaser.sh
@@ -25,4 +25,4 @@ download() {
 download
 tar -xf "$TAR_FILE" -C "$TMPDIR"
 
-"${TMPDIR}/goreleaser" --release-notes <(github-release-notes -org weaveworks -repo flagger -since-latest-release)
+"${TMPDIR}/goreleaser" --release-notes <(github-release-notes -org weaveworks -repo flagger -since-latest-release -include-author)

From e8924a7e27dfcfbf6c9241904773b694474d3bd3 Mon Sep 17 00:00:00 2001
From: stefanprodan
Date: Tue, 3 Mar 2020 12:39:08 +0200
Subject: [PATCH 04/10] Update podinfo chart to v1beta1 API

---
 charts/podinfo/Chart.yaml                |  2 +-
 charts/podinfo/templates/canary.yaml     | 14 ++++----------
 charts/podinfo/templates/deployment.yaml |  2 ++
 charts/podinfo/templates/hpa.yaml        |  8 +-------
 4 files changed, 8 insertions(+), 18 deletions(-)

diff --git a/charts/podinfo/Chart.yaml b/charts/podinfo/Chart.yaml
index 1aa57f108..0d52ce603 100644
--- a/charts/podinfo/Chart.yaml
+++ b/charts/podinfo/Chart.yaml
@@ -1,5 +1,5 @@
 apiVersion: v1
-version: 3.1.0
+version: 3.1.1
 appVersion: 3.1.0
 name: podinfo
 engine: gotpl
diff --git a/charts/podinfo/templates/canary.yaml b/charts/podinfo/templates/canary.yaml
index c1176a17b..bb6441427 100644
--- a/charts/podinfo/templates/canary.yaml
+++ b/charts/podinfo/templates/canary.yaml
@@ -1,5 +1,5 @@
 {{- if .Values.canary.enabled }}
-apiVersion: flagger.app/v1alpha3
+apiVersion: flagger.app/v1beta1
 kind: Canary
 metadata:
   name: {{ template "podinfo.fullname" . 
}} - progressDeadlineSeconds: 60 autoscalerRef: apiVersion: autoscaling/v2beta1 kind: HorizontalPodAutoscaler @@ -29,7 +28,7 @@ spec: trafficPolicy: tls: mode: {{ .Values.canary.istioTLS }} - canaryAnalysis: + analysis: interval: {{ .Values.canary.analysis.interval }} threshold: {{ .Values.canary.analysis.threshold }} maxWeight: {{ .Values.canary.analysis.maxWeight }} @@ -48,8 +47,8 @@ spec: url: {{ .Values.canary.helmtest.url }} timeout: 3m metadata: - type: "helm" - cmd: "test {{ .Release.Name }} --cleanup" + type: "helmv3" + cmd: "test {{ .Release.Name }} -n {{ .Release.Namespace }}" {{- end }} {{- if .Values.canary.loadtest.enabled }} - name: load-test-get @@ -57,10 +56,5 @@ spec: timeout: 5s metadata: cmd: "hey -z 1m -q 5 -c 2 http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}" - - name: load-test-post - url: {{ .Values.canary.loadtest.url }} - timeout: 5s - metadata: - cmd: "hey -z 1m -q 5 -c 2 -m POST -d '{\"test\": true}' http://{{ template "podinfo.fullname" . }}.{{ .Release.Namespace }}:{{ .Values.service.port }}/echo" {{- end }} {{- end }} \ No newline at end of file diff --git a/charts/podinfo/templates/deployment.yaml b/charts/podinfo/templates/deployment.yaml index 969e2a8a2..85c1955ce 100644 --- a/charts/podinfo/templates/deployment.yaml +++ b/charts/podinfo/templates/deployment.yaml @@ -41,6 +41,8 @@ spec: - --backend-url={{ . }} {{- end }} env: + - name: PODINFO_UI_COLOR + value: "#34577c" {{- if .Values.message }} - name: PODINFO_UI_MESSAGE value: {{ .Values.message }} diff --git a/charts/podinfo/templates/hpa.yaml b/charts/podinfo/templates/hpa.yaml index 9905cda64..8526f1622 100644 --- a/charts/podinfo/templates/hpa.yaml +++ b/charts/podinfo/templates/hpa.yaml @@ -10,7 +10,7 @@ metadata: heritage: {{ .Release.Service }} spec: scaleTargetRef: - apiVersion: apps/v1beta2 + apiVersion: apps/v1 kind: Deployment name: {{ template "podinfo.fullname" . 
}} minReplicas: {{ .Values.hpa.minReplicas }} @@ -28,10 +28,4 @@ spec: name: memory targetAverageValue: {{ .Values.hpa.memory }} {{- end }} - {{- if .Values.hpa.requests }} - - type: Pod - pods: - metricName: http_requests - targetAverageValue: {{ .Values.hpa.requests }} - {{- end }} {{- end }} From 4f0f7ff9db084125af511cf0afcb3ef1ddff59d1 Mon Sep 17 00:00:00 2001 From: stefanprodan Date: Tue, 3 Mar 2020 13:25:57 +0200 Subject: [PATCH 05/10] Update examples to v1beta1 API --- artifacts/appmesh/canary.yaml | 70 -- artifacts/appmesh/deployment.yaml | 65 -- artifacts/appmesh/global-mesh.yaml | 6 - artifacts/appmesh/hpa.yaml | 19 - artifacts/appmesh/ingress.yaml | 172 ---- artifacts/canaries/abtest.yaml | 67 -- artifacts/canaries/canary.yaml | 88 -- artifacts/canaries/deployment.yaml | 68 -- artifacts/canaries/hpa.yaml | 19 - artifacts/cluster/namespaces/test.yaml | 6 - artifacts/cluster/releases/test/backend.yaml | 26 - artifacts/cluster/releases/test/frontend.yaml | 27 - .../cluster/releases/test/loadtester.yaml | 18 - artifacts/eks/appmesh-prometheus.yaml | 264 ------ artifacts/examples/appmesh-abtest.yaml | 62 ++ artifacts/examples/appmesh-canary.yaml | 59 ++ artifacts/examples/istio-abtest.yaml | 70 ++ artifacts/examples/istio-canary.yaml | 66 ++ artifacts/examples/linkerd-canary.yaml | 52 ++ artifacts/gke/istio-gateway.yaml | 27 - artifacts/gke/istio-prometheus.yaml | 834 ------------------ artifacts/gloo/canary.yaml | 52 -- artifacts/gloo/virtual-service.yaml | 17 - artifacts/helmtester/deployment.yaml | 58 -- artifacts/helmtester/service.yaml | 16 - artifacts/loadtester/config.yaml | 19 - artifacts/loadtester/deployment.yaml | 67 -- artifacts/loadtester/service.yaml | 15 - artifacts/namespaces/test.yaml | 7 - artifacts/nginx/canary.yaml | 70 -- artifacts/nginx/ingress.yaml | 17 - 31 files changed, 309 insertions(+), 2114 deletions(-) delete mode 100644 artifacts/appmesh/canary.yaml delete mode 100644 artifacts/appmesh/deployment.yaml delete mode 100644 artifacts/appmesh/global-mesh.yaml delete mode 100644 artifacts/appmesh/hpa.yaml delete mode 100644 artifacts/appmesh/ingress.yaml delete mode 100644 artifacts/canaries/abtest.yaml delete mode 100644 artifacts/canaries/canary.yaml delete mode 100644 artifacts/canaries/deployment.yaml delete mode 100644 artifacts/canaries/hpa.yaml delete mode 100644 artifacts/cluster/namespaces/test.yaml delete mode 100644 artifacts/cluster/releases/test/backend.yaml delete mode 100644 artifacts/cluster/releases/test/frontend.yaml delete mode 100644 artifacts/cluster/releases/test/loadtester.yaml delete mode 100644 artifacts/eks/appmesh-prometheus.yaml create mode 100644 artifacts/examples/appmesh-abtest.yaml create mode 100644 artifacts/examples/appmesh-canary.yaml create mode 100644 artifacts/examples/istio-abtest.yaml create mode 100644 artifacts/examples/istio-canary.yaml create mode 100644 artifacts/examples/linkerd-canary.yaml delete mode 100644 artifacts/gke/istio-gateway.yaml delete mode 100644 artifacts/gke/istio-prometheus.yaml delete mode 100644 artifacts/gloo/canary.yaml delete mode 100644 artifacts/gloo/virtual-service.yaml delete mode 100644 artifacts/helmtester/deployment.yaml delete mode 100644 artifacts/helmtester/service.yaml delete mode 100644 artifacts/loadtester/config.yaml delete mode 100644 artifacts/loadtester/deployment.yaml delete mode 100644 artifacts/loadtester/service.yaml delete mode 100644 artifacts/namespaces/test.yaml delete mode 100644 artifacts/nginx/canary.yaml delete mode 100644 artifacts/nginx/ingress.yaml diff --git 
a/artifacts/appmesh/canary.yaml b/artifacts/appmesh/canary.yaml deleted file mode 100644 index e1db13142..000000000 --- a/artifacts/appmesh/canary.yaml +++ /dev/null @@ -1,70 +0,0 @@ -apiVersion: flagger.app/v1alpha3 -kind: Canary -metadata: - name: podinfo - namespace: test -spec: - # deployment reference - targetRef: - apiVersion: apps/v1 - kind: Deployment - name: podinfo - # the maximum time in seconds for the canary deployment - # to make progress before it is rollback (default 600s) - progressDeadlineSeconds: 60 - # HPA reference (optional) - autoscalerRef: - apiVersion: autoscaling/v2beta1 - kind: HorizontalPodAutoscaler - name: podinfo - service: - # container port - port: 9898 - # container port name (optional) - # can be http or grpc - portName: http - # App Mesh reference - meshName: global - # App Mesh retry policy (optional) - retries: - attempts: 3 - perTryTimeout: 1s - retryOn: "gateway-error,client-error,stream-error" - # define the canary analysis timing and KPIs - canaryAnalysis: - # schedule interval (default 60s) - interval: 10s - # max number of failed metric checks before rollback - threshold: 10 - # max traffic percentage routed to canary - # percentage (0-100) - maxWeight: 50 - # canary increment step - # percentage (0-100) - stepWeight: 5 - # App Mesh Prometheus checks - metrics: - - name: request-success-rate - # minimum req success rate (non 5xx responses) - # percentage (0-100) - threshold: 99 - interval: 1m - - name: request-duration - # maximum req duration P99 - # milliseconds - threshold: 500 - interval: 30s - # testing (optional) - webhooks: - - name: acceptance-test - type: pre-rollout - url: http://flagger-loadtester.test/ - timeout: 30s - metadata: - type: bash - cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token" - - name: load-test - url: http://flagger-loadtester.test/ - timeout: 5s - metadata: - cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/" diff --git a/artifacts/appmesh/deployment.yaml b/artifacts/appmesh/deployment.yaml deleted file mode 100644 index 5d1ee25ff..000000000 --- a/artifacts/appmesh/deployment.yaml +++ /dev/null @@ -1,65 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: podinfo - namespace: test - labels: - app: podinfo -spec: - minReadySeconds: 5 - revisionHistoryLimit: 5 - progressDeadlineSeconds: 60 - strategy: - rollingUpdate: - maxUnavailable: 0 - type: RollingUpdate - selector: - matchLabels: - app: podinfo - template: - metadata: - annotations: - prometheus.io/scrape: "true" - labels: - app: podinfo - spec: - containers: - - name: podinfod - image: stefanprodan/podinfo:3.1.0 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9898 - name: http - protocol: TCP - command: - - ./podinfo - - --port=9898 - - --level=info - env: - - name: PODINFO_UI_COLOR - value: blue - livenessProbe: - exec: - command: - - podcli - - check - - http - - localhost:9898/healthz - initialDelaySeconds: 5 - timeoutSeconds: 5 - readinessProbe: - exec: - command: - - podcli - - check - - http - - localhost:9898/readyz - initialDelaySeconds: 5 - timeoutSeconds: 5 - resources: - limits: - cpu: 2000m - memory: 512Mi - requests: - cpu: 100m - memory: 64Mi diff --git a/artifacts/appmesh/global-mesh.yaml b/artifacts/appmesh/global-mesh.yaml deleted file mode 100644 index 01d6c8ff2..000000000 --- a/artifacts/appmesh/global-mesh.yaml +++ /dev/null @@ -1,6 +0,0 @@ -apiVersion: appmesh.k8s.aws/v1beta1 -kind: Mesh -metadata: - name: global -spec: - serviceDiscoveryType: dns diff --git a/artifacts/appmesh/hpa.yaml 
b/artifacts/appmesh/hpa.yaml deleted file mode 100644 index fa2b5a6f4..000000000 --- a/artifacts/appmesh/hpa.yaml +++ /dev/null @@ -1,19 +0,0 @@ -apiVersion: autoscaling/v2beta1 -kind: HorizontalPodAutoscaler -metadata: - name: podinfo - namespace: test -spec: - scaleTargetRef: - apiVersion: apps/v1 - kind: Deployment - name: podinfo - minReplicas: 2 - maxReplicas: 4 - metrics: - - type: Resource - resource: - name: cpu - # scale up if usage is above - # 99% of the requested CPU (100m) - targetAverageUtilization: 99 diff --git a/artifacts/appmesh/ingress.yaml b/artifacts/appmesh/ingress.yaml deleted file mode 100644 index b4d69eb25..000000000 --- a/artifacts/appmesh/ingress.yaml +++ /dev/null @@ -1,172 +0,0 @@ ---- -kind: ConfigMap -apiVersion: v1 -metadata: - name: ingress-config - namespace: test - labels: - app: ingress -data: - envoy.yaml: | - static_resources: - listeners: - - address: - socket_address: - address: 0.0.0.0 - port_value: 8080 - filter_chains: - - filters: - - name: envoy.http_connection_manager - config: - access_log: - - name: envoy.file_access_log - config: - path: /dev/stdout - codec_type: auto - stat_prefix: ingress_http - http_filters: - - name: envoy.router - config: {} - route_config: - name: local_route - virtual_hosts: - - name: local_service - domains: ["*"] - routes: - - match: - prefix: "/" - route: - cluster: podinfo - host_rewrite: podinfo.test - timeout: 15s - retry_policy: - retry_on: "gateway-error,connect-failure,refused-stream" - num_retries: 10 - per_try_timeout: 5s - clusters: - - name: podinfo - connect_timeout: 0.30s - type: strict_dns - lb_policy: round_robin - load_assignment: - cluster_name: podinfo - endpoints: - - lb_endpoints: - - endpoint: - address: - socket_address: - address: podinfo.test - port_value: 9898 - admin: - access_log_path: /dev/null - address: - socket_address: - address: 0.0.0.0 - port_value: 9999 ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: ingress - namespace: test - labels: - app: ingress -spec: - replicas: 1 - selector: - matchLabels: - app: ingress - strategy: - type: RollingUpdate - rollingUpdate: - maxUnavailable: 0 - template: - metadata: - labels: - app: ingress - annotations: - prometheus.io/path: "/stats/prometheus" - prometheus.io/port: "9999" - prometheus.io/scrape: "true" - # dummy port to exclude ingress from mesh traffic - # only egress should go over the mesh - appmesh.k8s.aws/ports: "444" - spec: - terminationGracePeriodSeconds: 30 - containers: - - name: ingress - image: "envoyproxy/envoy-alpine:v1.11.1" - securityContext: - capabilities: - drop: - - ALL - add: - - NET_BIND_SERVICE - command: - - /usr/local/bin/envoy - args: - - -l - - $loglevel - - -c - - /config/envoy.yaml - - --base-id - - "1234" - ports: - - name: admin - containerPort: 9999 - protocol: TCP - - name: http - containerPort: 8080 - protocol: TCP - livenessProbe: - initialDelaySeconds: 5 - tcpSocket: - port: admin - readinessProbe: - initialDelaySeconds: 5 - tcpSocket: - port: admin - resources: - requests: - cpu: 100m - memory: 64Mi - volumeMounts: - - name: config - mountPath: /config - volumes: - - name: config - configMap: - name: ingress-config ---- -kind: Service -apiVersion: v1 -metadata: - name: ingress - namespace: test -spec: - selector: - app: ingress - ports: - - protocol: TCP - name: http - port: 80 - targetPort: http - type: LoadBalancer ---- -apiVersion: appmesh.k8s.aws/v1beta1 -kind: VirtualNode -metadata: - name: ingress - namespace: test -spec: - meshName: global - listeners: - - portMapping: - port: 80 
- protocol: http - serviceDiscovery: - dns: - hostName: ingress.test - backends: - - virtualService: - virtualServiceName: podinfo.test \ No newline at end of file diff --git a/artifacts/canaries/abtest.yaml b/artifacts/canaries/abtest.yaml deleted file mode 100644 index 65afa9296..000000000 --- a/artifacts/canaries/abtest.yaml +++ /dev/null @@ -1,67 +0,0 @@ -apiVersion: flagger.app/v1alpha3 -kind: Canary -metadata: - name: podinfo - namespace: test -spec: - # deployment reference - targetRef: - apiVersion: apps/v1 - kind: Deployment - name: podinfo - # the maximum time in seconds for the canary deployment - # to make progress before it is rollback (default 600s) - progressDeadlineSeconds: 60 - # HPA reference (optional) - autoscalerRef: - apiVersion: autoscaling/v2beta1 - kind: HorizontalPodAutoscaler - name: podinfo - service: - # container port - port: 9898 - # Istio gateways (optional) - gateways: - - public-gateway.istio-system.svc.cluster.local - - mesh - # Istio virtual service host names (optional) - hosts: - - app.example.com - # Istio traffic policy (optional) - trafficPolicy: - tls: - # use ISTIO_MUTUAL when mTLS is enabled - mode: DISABLE - canaryAnalysis: - # schedule interval (default 60s) - interval: 10s - # max number of failed metric checks before rollback - threshold: 10 - # total number of iterations - iterations: 10 - # canary match condition - match: - - headers: - cookie: - regex: "^(.*?;)?(type=insider)(;.*)?$" - - headers: - user-agent: - regex: "(?=.*Safari)(?!.*Chrome).*$" - metrics: - - name: request-success-rate - # minimum req success rate (non 5xx responses) - # percentage (0-100) - threshold: 99 - interval: 1m - - name: request-duration - # maximum req duration P99 - # milliseconds - threshold: 500 - interval: 30s - # external checks (optional) - webhooks: - - name: load-test - url: http://flagger-loadtester.test/ - timeout: 5s - metadata: - cmd: "hey -z 1m -q 10 -c 2 -H 'Cookie: type=insider' http://podinfo.test:9898/" diff --git a/artifacts/canaries/canary.yaml b/artifacts/canaries/canary.yaml deleted file mode 100644 index 9fdc9c214..000000000 --- a/artifacts/canaries/canary.yaml +++ /dev/null @@ -1,88 +0,0 @@ -apiVersion: flagger.app/v1alpha3 -kind: Canary -metadata: - name: podinfo - namespace: test -spec: - # service mesh provider (default istio) - # can be: kubernetes, istio, appmesh, smi, nginx, gloo, supergloo - # use the kubernetes provider for Blue/Green style deployments - provider: istio - # deployment reference - targetRef: - apiVersion: apps/v1 - kind: Deployment - name: podinfo - # the maximum time in seconds for the canary deployment - # to make progress before it is rollback (default 600s) - progressDeadlineSeconds: 60 - # HPA reference (optional) - autoscalerRef: - apiVersion: autoscaling/v2beta1 - kind: HorizontalPodAutoscaler - name: podinfo - service: - # container port - port: 9898 - # port name can be http or grpc (default http) - portName: http - # add all the other container ports - # when generating ClusterIP services (default false) - portDiscovery: false - # Istio gateways (optional) - gateways: - - public-gateway.istio-system.svc.cluster.local - # remove the mesh gateway if the public host is - # shared across multiple virtual services - - mesh - # Istio virtual service host names (optional) - hosts: - - app.example.com - # Istio traffic policy (optional) - trafficPolicy: - tls: - # use ISTIO_MUTUAL when mTLS is enabled - mode: DISABLE - # HTTP match conditions (optional) - match: - - uri: - prefix: / - # HTTP rewrite (optional) 
- rewrite: - uri: / - # HTTP timeout (optional) - timeout: 30s - # promote the canary without analysing it (default false) - skipAnalysis: false - canaryAnalysis: - # schedule interval (default 60s) - interval: 10s - # max number of failed metric checks before rollback - threshold: 10 - # max traffic percentage routed to canary - # percentage (0-100) - maxWeight: 50 - # canary increment step - # percentage (0-100) - stepWeight: 5 - # Prometheus checks - metrics: - - name: request-success-rate - # minimum req success rate (non 5xx responses) - # percentage (0-100) - threshold: 99 - interval: 1m - - name: request-duration - # maximum req duration P99 - # milliseconds - threshold: 500 - interval: 30s - # external checks (optional) - webhooks: - - name: load-test - url: http://flagger-loadtester.test/ - timeout: 5s - metadata: - type: cmd - cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/" - logCmdOutput: "true" diff --git a/artifacts/canaries/deployment.yaml b/artifacts/canaries/deployment.yaml deleted file mode 100644 index 602fd4770..000000000 --- a/artifacts/canaries/deployment.yaml +++ /dev/null @@ -1,68 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: podinfo - namespace: test - labels: - app: podinfo -spec: - minReadySeconds: 5 - revisionHistoryLimit: 5 - progressDeadlineSeconds: 60 - strategy: - rollingUpdate: - maxUnavailable: 0 - type: RollingUpdate - selector: - matchLabels: - app: podinfo - template: - metadata: - annotations: - prometheus.io/scrape: "true" - prometheus.io/port: "9898" - labels: - app: podinfo - spec: - containers: - - name: podinfod - image: stefanprodan/podinfo:3.1.0 - imagePullPolicy: IfNotPresent - ports: - - containerPort: 9898 - name: http - protocol: TCP - command: - - ./podinfo - - --port=9898 - - --level=info - - --random-delay=false - - --random-error=false - env: - - name: PODINFO_UI_COLOR - value: blue - livenessProbe: - exec: - command: - - podcli - - check - - http - - localhost:9898/healthz - initialDelaySeconds: 5 - timeoutSeconds: 5 - readinessProbe: - exec: - command: - - podcli - - check - - http - - localhost:9898/readyz - initialDelaySeconds: 5 - timeoutSeconds: 5 - resources: - limits: - cpu: 2000m - memory: 512Mi - requests: - cpu: 100m - memory: 64Mi diff --git a/artifacts/canaries/hpa.yaml b/artifacts/canaries/hpa.yaml deleted file mode 100644 index fa2b5a6f4..000000000 --- a/artifacts/canaries/hpa.yaml +++ /dev/null @@ -1,19 +0,0 @@ -apiVersion: autoscaling/v2beta1 -kind: HorizontalPodAutoscaler -metadata: - name: podinfo - namespace: test -spec: - scaleTargetRef: - apiVersion: apps/v1 - kind: Deployment - name: podinfo - minReplicas: 2 - maxReplicas: 4 - metrics: - - type: Resource - resource: - name: cpu - # scale up if usage is above - # 99% of the requested CPU (100m) - targetAverageUtilization: 99 diff --git a/artifacts/cluster/namespaces/test.yaml b/artifacts/cluster/namespaces/test.yaml deleted file mode 100644 index 6126d753f..000000000 --- a/artifacts/cluster/namespaces/test.yaml +++ /dev/null @@ -1,6 +0,0 @@ -apiVersion: v1 -kind: Namespace -metadata: - name: test - labels: - istio-injection: enabled diff --git a/artifacts/cluster/releases/test/backend.yaml b/artifacts/cluster/releases/test/backend.yaml deleted file mode 100644 index 79ac9bbbf..000000000 --- a/artifacts/cluster/releases/test/backend.yaml +++ /dev/null @@ -1,26 +0,0 @@ -apiVersion: flux.weave.works/v1beta1 -kind: HelmRelease -metadata: - name: backend - namespace: test - annotations: - flux.weave.works/automated: "true" - 
flux.weave.works/tag.chart-image: regexp:^1.7.* -spec: - releaseName: backend - chart: - repository: https://flagger.app/ - name: podinfo - version: 2.2.0 - values: - image: - repository: quay.io/stefanprodan/podinfo - tag: 1.7.0 - httpServer: - timeout: 30s - canary: - enabled: true - istioIngress: - enabled: false - loadtest: - enabled: true diff --git a/artifacts/cluster/releases/test/frontend.yaml b/artifacts/cluster/releases/test/frontend.yaml deleted file mode 100644 index 0a62c8959..000000000 --- a/artifacts/cluster/releases/test/frontend.yaml +++ /dev/null @@ -1,27 +0,0 @@ -apiVersion: flux.weave.works/v1beta1 -kind: HelmRelease -metadata: - name: frontend - namespace: test - annotations: - flux.weave.works/automated: "true" - flux.weave.works/tag.chart-image: semver:~1.7 -spec: - releaseName: frontend - chart: - repository: https://flagger.app/ - name: podinfo - version: 2.2.0 - values: - image: - repository: quay.io/stefanprodan/podinfo - tag: 1.7.0 - backend: http://backend-podinfo:9898/echo - canary: - enabled: true - istioIngress: - enabled: true - gateway: public-gateway.istio-system.svc.cluster.local - host: frontend.istio.example.com - loadtest: - enabled: true diff --git a/artifacts/cluster/releases/test/loadtester.yaml b/artifacts/cluster/releases/test/loadtester.yaml deleted file mode 100644 index bd742d604..000000000 --- a/artifacts/cluster/releases/test/loadtester.yaml +++ /dev/null @@ -1,18 +0,0 @@ -apiVersion: flux.weave.works/v1beta1 -kind: HelmRelease -metadata: - name: loadtester - namespace: test - annotations: - flux.weave.works/automated: "true" - flux.weave.works/tag.chart-image: glob:0.* -spec: - releaseName: flagger-loadtester - chart: - repository: https://flagger.app/ - name: loadtester - version: 0.6.0 - values: - image: - repository: weaveworks/flagger-loadtester - tag: 0.6.1 diff --git a/artifacts/eks/appmesh-prometheus.yaml b/artifacts/eks/appmesh-prometheus.yaml deleted file mode 100644 index c9386d6fc..000000000 --- a/artifacts/eks/appmesh-prometheus.yaml +++ /dev/null @@ -1,264 +0,0 @@ ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: prometheus - labels: - app: prometheus -rules: - - apiGroups: [""] - resources: - - nodes - - services - - endpoints - - pods - - nodes/proxy - verbs: ["get", "list", "watch"] - - apiGroups: [""] - resources: - - configmaps - verbs: ["get"] - - nonResourceURLs: ["/metrics"] - verbs: ["get"] ---- -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: prometheus - labels: - app: prometheus -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: prometheus -subjects: - - kind: ServiceAccount - name: prometheus - namespace: appmesh-system ---- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: prometheus - namespace: appmesh-system - labels: - app: prometheus ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: prometheus - namespace: appmesh-system - labels: - app: prometheus -data: - prometheus.yml: |- - global: - scrape_interval: 5s - scrape_configs: - - # Scrape config for AppMesh Envoy sidecar - - job_name: 'appmesh-envoy' - metrics_path: /stats/prometheus - kubernetes_sd_configs: - - role: pod - - relabel_configs: - - source_labels: [__meta_kubernetes_pod_container_name] - action: keep - regex: '^envoy$' - - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] - action: replace - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: ${1}:9901 - target_label: __address__ - - action: labelmap - 
regex: __meta_kubernetes_pod_label_(.+) - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: kubernetes_namespace - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: kubernetes_pod_name - - # Exclude high cardinality metrics - metric_relabel_configs: - - source_labels: [ cluster_name ] - regex: '(outbound|inbound|prometheus_stats).*' - action: drop - - source_labels: [ tcp_prefix ] - regex: '(outbound|inbound|prometheus_stats).*' - action: drop - - source_labels: [ listener_address ] - regex: '(.+)' - action: drop - - source_labels: [ http_conn_manager_listener_prefix ] - regex: '(.+)' - action: drop - - source_labels: [ http_conn_manager_prefix ] - regex: '(.+)' - action: drop - - source_labels: [ __name__ ] - regex: 'envoy_tls.*' - action: drop - - source_labels: [ __name__ ] - regex: 'envoy_tcp_downstream.*' - action: drop - - source_labels: [ __name__ ] - regex: 'envoy_http_(stats|admin).*' - action: drop - - source_labels: [ __name__ ] - regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*' - action: drop - - # Scrape config for API servers - - job_name: 'kubernetes-apiservers' - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - default - scheme: https - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: kubernetes;https - - # Scrape config for nodes - - job_name: 'kubernetes-nodes' - scheme: https - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics - - # scrape config for cAdvisor - - job_name: 'kubernetes-cadvisor' - scheme: https - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor - - # scrape config for pods - - job_name: kubernetes-pods - kubernetes_sd_configs: - - role: pod - relabel_configs: - - action: keep - regex: true - source_labels: - - __meta_kubernetes_pod_annotation_prometheus_io_scrape - - source_labels: [ __address__ ] - regex: '.*9901.*' - action: drop - - action: replace - regex: (.+) - source_labels: - - __meta_kubernetes_pod_annotation_prometheus_io_path - target_label: __metrics_path__ - - action: replace - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: $1:$2 - source_labels: - - __address__ - - __meta_kubernetes_pod_annotation_prometheus_io_port - target_label: __address__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - action: replace - source_labels: - - __meta_kubernetes_namespace - target_label: kubernetes_namespace - - action: replace - source_labels: - - 
__meta_kubernetes_pod_name - target_label: kubernetes_pod_name ---- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: prometheus - namespace: appmesh-system - labels: - app: prometheus -spec: - replicas: 1 - selector: - matchLabels: - app: prometheus - template: - metadata: - labels: - app: prometheus - annotations: - version: "appmesh-v1alpha1" - spec: - serviceAccountName: prometheus - containers: - - name: prometheus - image: "docker.io/prom/prometheus:v2.7.1" - imagePullPolicy: IfNotPresent - args: - - '--storage.tsdb.retention=6h' - - '--config.file=/etc/prometheus/prometheus.yml' - ports: - - containerPort: 9090 - name: http - livenessProbe: - httpGet: - path: /-/healthy - port: 9090 - readinessProbe: - httpGet: - path: /-/ready - port: 9090 - resources: - requests: - cpu: 10m - memory: 128Mi - volumeMounts: - - name: config-volume - mountPath: /etc/prometheus - volumes: - - name: config-volume - configMap: - name: prometheus ---- -apiVersion: v1 -kind: Service -metadata: - name: prometheus - namespace: appmesh-system - labels: - name: prometheus -spec: - selector: - app: prometheus - ports: - - name: http - protocol: TCP - port: 9090 diff --git a/artifacts/examples/appmesh-abtest.yaml b/artifacts/examples/appmesh-abtest.yaml new file mode 100644 index 000000000..6fddb07ec --- /dev/null +++ b/artifacts/examples/appmesh-abtest.yaml @@ -0,0 +1,62 @@ +apiVersion: flagger.app/v1beta1 +kind: Canary +metadata: + name: podinfo + namespace: test +spec: + provider: appmesh + progressDeadlineSeconds: 600 + targetRef: + apiVersion: apps/v1 + kind: Deployment + name: podinfo + autoscalerRef: + apiVersion: autoscaling/v2beta1 + kind: HorizontalPodAutoscaler + name: podinfo + service: + port: 80 + targetPort: 9898 + meshName: global + retries: + attempts: 3 + perTryTimeout: 5s + retryOn: "gateway-error,client-error,stream-error" + timeout: 35s + match: + - uri: + prefix: / + rewrite: + uri: / + analysis: + interval: 15s + threshold: 10 + iterations: 10 + match: + - headers: + x-canary: + exact: "insider" + metrics: + - name: request-success-rate + thresholdRange: + min: 99 + interval: 1m + - name: request-duration + thresholdRange: + max: 500 + interval: 30s + webhooks: + - name: conformance-test + type: pre-rollout + url: http://flagger-loadtester.test/ + timeout: 15s + metadata: + type: "bash" + cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token" + - name: load-test + type: rollout + url: http://flagger-loadtester.test/ + timeout: 5s + metadata: + type: cmd + cmd: "hey -z 1m -q 10 -c 2 -H 'X-Canary: insider' http://podinfo-canary.test/" diff --git a/artifacts/examples/appmesh-canary.yaml b/artifacts/examples/appmesh-canary.yaml new file mode 100644 index 000000000..dcf40c437 --- /dev/null +++ b/artifacts/examples/appmesh-canary.yaml @@ -0,0 +1,59 @@ +apiVersion: flagger.app/v1beta1 +kind: Canary +metadata: + name: podinfo + namespace: test +spec: + provider: appmesh + progressDeadlineSeconds: 600 + targetRef: + apiVersion: apps/v1 + kind: Deployment + name: podinfo + autoscalerRef: + apiVersion: autoscaling/v2beta1 + kind: HorizontalPodAutoscaler + name: podinfo + service: + port: 80 + targetPort: http + meshName: global + retries: + attempts: 3 + perTryTimeout: 5s + retryOn: "gateway-error,client-error,stream-error" + timeout: 35s + match: + - uri: + prefix: / + rewrite: + uri: / + analysis: + interval: 15s + threshold: 10 + maxWeight: 50 + stepWeight: 5 + metrics: + - name: request-success-rate + thresholdRange: + min: 99 + interval: 1m + - name: request-duration + 
thresholdRange: + max: 500 + interval: 30s + webhooks: + - name: conformance-test + type: pre-rollout + url: http://flagger-loadtester.test/ + timeout: 15s + metadata: + type: "bash" + cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token" + - name: load-test + type: rollout + url: http://flagger-loadtester.test/ + timeout: 5s + metadata: + type: cmd + cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/" diff --git a/artifacts/examples/istio-abtest.yaml b/artifacts/examples/istio-abtest.yaml new file mode 100644 index 000000000..f0492bc23 --- /dev/null +++ b/artifacts/examples/istio-abtest.yaml @@ -0,0 +1,70 @@ +apiVersion: flagger.app/v1beta1 +kind: Canary +metadata: + name: podinfo + namespace: test +spec: + provider: istio + targetRef: + apiVersion: apps/v1 + kind: Deployment + name: podinfo + autoscalerRef: + apiVersion: autoscaling/v2beta1 + kind: HorizontalPodAutoscaler + name: podinfo + service: + name: podinfo + port: 80 + targetPort: 9898 + portName: http + portDiscovery: true + gateways: + - public-gateway.istio-system.svc.cluster.local + - mesh + hosts: + - app.example.com + trafficPolicy: + tls: + mode: DISABLE + match: + - uri: + prefix: / + rewrite: + uri: / + timeout: 30s + analysis: + interval: 15s + threshold: 10 + iterations: 10 + match: + - headers: + cookie: + regex: "^(.*?;)?(type=insider)(;.*)?$" + - headers: + user-agent: + regex: "(?=.*Safari)(?!.*Chrome).*$" + metrics: + - name: request-success-rate + thresholdRange: + min: 99 + interval: 1m + - name: request-duration + thresholdRange: + max: 500 + interval: 30s + webhooks: + - name: conformance-test + type: pre-rollout + url: http://flagger-loadtester.test/ + timeout: 15s + metadata: + type: "bash" + cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token" + - name: load-test + type: rollout + url: http://flagger-loadtester.test/ + timeout: 5s + metadata: + type: cmd + cmd: "hey -z 1m -q 10 -c 2 -H 'Cookie: type=insider' http://podinfo.test/" diff --git a/artifacts/examples/istio-canary.yaml b/artifacts/examples/istio-canary.yaml new file mode 100644 index 000000000..6f5760980 --- /dev/null +++ b/artifacts/examples/istio-canary.yaml @@ -0,0 +1,66 @@ +apiVersion: flagger.app/v1beta1 +kind: Canary +metadata: + name: podinfo + namespace: test +spec: + provider: istio + progressDeadlineSeconds: 600 + targetRef: + apiVersion: apps/v1 + kind: Deployment + name: podinfo + autoscalerRef: + apiVersion: autoscaling/v2beta1 + kind: HorizontalPodAutoscaler + name: podinfo + service: + name: podinfo + port: 80 + targetPort: 9898 + portName: http + portDiscovery: true + gateways: + - public-gateway.istio-system.svc.cluster.local + - mesh + hosts: + - app.example.com + trafficPolicy: + tls: + mode: DISABLE + match: + - uri: + prefix: / + rewrite: + uri: / + timeout: 30s + skipAnalysis: false + analysis: + interval: 15s + threshold: 10 + maxWeight: 50 + stepWeight: 5 + metrics: + - name: request-success-rate + thresholdRange: + min: 99 + interval: 1m + - name: request-duration + thresholdRange: + max: 500 + interval: 30s + webhooks: + - name: conformance-test + type: pre-rollout + url: http://flagger-loadtester.test/ + timeout: 15s + metadata: + type: "bash" + cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token" + - name: load-test + type: rollout + url: http://flagger-loadtester.test/ + timeout: 5s + metadata: + type: cmd + cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/" diff --git a/artifacts/examples/linkerd-canary.yaml b/artifacts/examples/linkerd-canary.yaml new file mode 
100644 index 000000000..63884b9f1 --- /dev/null +++ b/artifacts/examples/linkerd-canary.yaml @@ -0,0 +1,52 @@ +apiVersion: flagger.app/v1beta1 +kind: Canary +metadata: + name: podinfo + namespace: test +spec: + provider: linkerd + progressDeadlineSeconds: 600 + targetRef: + apiVersion: apps/v1 + kind: Deployment + name: podinfo + autoscalerRef: + apiVersion: autoscaling/v2beta1 + kind: HorizontalPodAutoscaler + name: podinfo + service: + name: podinfo + port: 80 + targetPort: 9898 + portName: http + portDiscovery: true + skipAnalysis: false + analysis: + interval: 15s + threshold: 10 + maxWeight: 50 + stepWeight: 5 + metrics: + - name: request-success-rate + thresholdRange: + min: 99 + interval: 1m + - name: request-duration + thresholdRange: + max: 500 + interval: 30s + webhooks: + - name: conformance-test + type: pre-rollout + url: http://flagger-loadtester.test/ + timeout: 15s + metadata: + type: "bash" + cmd: "curl -sd 'test' http://podinfo-canary.test/token | grep token" + - name: load-test + type: rollout + url: http://flagger-loadtester.test/ + timeout: 5s + metadata: + type: cmd + cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test/" diff --git a/artifacts/gke/istio-gateway.yaml b/artifacts/gke/istio-gateway.yaml deleted file mode 100644 index 79c016156..000000000 --- a/artifacts/gke/istio-gateway.yaml +++ /dev/null @@ -1,27 +0,0 @@ -apiVersion: networking.istio.io/v1alpha3 -kind: Gateway -metadata: - name: public-gateway - namespace: istio-system -spec: - selector: - istio: ingressgateway - servers: - - port: - number: 80 - name: http - protocol: HTTP - hosts: - - "*" - tls: - httpsRedirect: true - - port: - number: 443 - name: https - protocol: HTTPS - hosts: - - "*" - tls: - mode: SIMPLE - privateKey: /etc/istio/ingressgateway-certs/tls.key - serverCertificate: /etc/istio/ingressgateway-certs/tls.crt diff --git a/artifacts/gke/istio-prometheus.yaml b/artifacts/gke/istio-prometheus.yaml deleted file mode 100644 index 07944d6e4..000000000 --- a/artifacts/gke/istio-prometheus.yaml +++ /dev/null @@ -1,834 +0,0 @@ -# Source: istio/charts/prometheus/templates/configmap.yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: prometheus - namespace: istio-system - labels: - app: prometheus - chart: prometheus-1.0.6 - heritage: Tiller - release: istio -data: - prometheus.yml: |- - global: - scrape_interval: 15s - scrape_configs: - - - job_name: 'istio-mesh' - # Override the global default and scrape targets from this job every 5 seconds. 
- scrape_interval: 5s - - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-telemetry;prometheus - - - # Scrape config for envoy stats - - job_name: 'envoy-stats' - metrics_path: /stats/prometheus - kubernetes_sd_configs: - - role: pod - - relabel_configs: - - source_labels: [__meta_kubernetes_pod_container_port_name] - action: keep - regex: '.*-envoy-prom' - - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] - action: replace - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: $1:15090 - target_label: __address__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: namespace - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: pod_name - - metric_relabel_configs: - # Exclude some of the envoy metrics that have massive cardinality - # This list may need to be pruned further moving forward, as informed - # by performance and scalability testing. - - source_labels: [ cluster_name ] - regex: '(outbound|inbound|prometheus_stats).*' - action: drop - - source_labels: [ tcp_prefix ] - regex: '(outbound|inbound|prometheus_stats).*' - action: drop - - source_labels: [ listener_address ] - regex: '(.+)' - action: drop - - source_labels: [ http_conn_manager_listener_prefix ] - regex: '(.+)' - action: drop - - source_labels: [ http_conn_manager_prefix ] - regex: '(.+)' - action: drop - - source_labels: [ __name__ ] - regex: 'envoy_tls.*' - action: drop - - source_labels: [ __name__ ] - regex: 'envoy_tcp_downstream.*' - action: drop - - source_labels: [ __name__ ] - regex: 'envoy_http_(stats|admin).*' - action: drop - - source_labels: [ __name__ ] - regex: 'envoy_cluster_(lb|retry|bind|internal|max|original).*' - action: drop - - - - job_name: 'istio-policy' - # Override the global default and scrape targets from this job every 5 seconds. - scrape_interval: 5s - # metrics_path defaults to '/metrics' - # scheme defaults to 'http'. - - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-policy;http-monitoring - - - job_name: 'istio-telemetry' - # Override the global default and scrape targets from this job every 5 seconds. - scrape_interval: 5s - # metrics_path defaults to '/metrics' - # scheme defaults to 'http'. - - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-telemetry;http-monitoring - - - job_name: 'pilot' - # Override the global default and scrape targets from this job every 5 seconds. - scrape_interval: 5s - # metrics_path defaults to '/metrics' - # scheme defaults to 'http'. - - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-pilot;http-monitoring - - - job_name: 'galley' - # Override the global default and scrape targets from this job every 5 seconds. - scrape_interval: 5s - # metrics_path defaults to '/metrics' - # scheme defaults to 'http'. 
- - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - istio-system - - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: istio-galley;http-monitoring - - # scrape config for API servers - - job_name: 'kubernetes-apiservers' - kubernetes_sd_configs: - - role: endpoints - namespaces: - names: - - default - scheme: https - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - relabel_configs: - - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] - action: keep - regex: kubernetes;https - - # scrape config for nodes (kubelet) - - job_name: 'kubernetes-nodes' - scheme: https - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics - - # Scrape config for Kubelet cAdvisor. - # - # This is required for Kubernetes 1.7.3 and later, where cAdvisor metrics - # (those whose names begin with 'container_') have been removed from the - # Kubelet metrics endpoint. This job scrapes the cAdvisor endpoint to - # retrieve those metrics. - # - # In Kubernetes 1.7.0-1.7.2, these metrics are only exposed on the cAdvisor - # HTTP endpoint; use "replacement: /api/v1/nodes/${1}:4194/proxy/metrics" - # in that case (and ensure cAdvisor's HTTP server hasn't been disabled with - # the --cadvisor-port=0 Kubelet flag). - # - # This job is not necessary and should be removed in Kubernetes 1.6 and - # earlier versions, or it will cause the metrics to be scraped twice. - - job_name: 'kubernetes-cadvisor' - scheme: https - tls_config: - ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt - bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token - kubernetes_sd_configs: - - role: node - relabel_configs: - - action: labelmap - regex: __meta_kubernetes_node_label_(.+) - - target_label: __address__ - replacement: kubernetes.default.svc:443 - - source_labels: [__meta_kubernetes_node_name] - regex: (.+) - target_label: __metrics_path__ - replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor - - # scrape config for service endpoints. - - job_name: 'kubernetes-service-endpoints' - kubernetes_sd_configs: - - role: endpoints - relabel_configs: - - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] - action: keep - regex: true - - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] - action: replace - target_label: __scheme__ - regex: (https?) 
- - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] - action: replace - target_label: __metrics_path__ - regex: (.+) - - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] - action: replace - target_label: __address__ - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: $1:$2 - - action: labelmap - regex: __meta_kubernetes_service_label_(.+) - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: kubernetes_namespace - - source_labels: [__meta_kubernetes_service_name] - action: replace - target_label: kubernetes_name - - - job_name: 'kubernetes-pods' - kubernetes_sd_configs: - - role: pod - relabel_configs: # If first two labels are present, pod should be scraped by the istio-secure job. - - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] - action: keep - regex: true - - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status] - action: drop - regex: (.+) - - source_labels: [__meta_kubernetes_pod_annotation_istio_mtls] - action: drop - regex: (true) - - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] - action: replace - target_label: __metrics_path__ - regex: (.+) - - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] - action: replace - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: $1:$2 - target_label: __address__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: namespace - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: pod_name - - - job_name: 'kubernetes-pods-istio-secure' - scheme: https - tls_config: - ca_file: /etc/istio-certs/root-cert.pem - cert_file: /etc/istio-certs/cert-chain.pem - key_file: /etc/istio-certs/key.pem - insecure_skip_verify: true # prometheus does not support secure naming. - kubernetes_sd_configs: - - role: pod - relabel_configs: - - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] - action: keep - regex: true - # sidecar status annotation is added by sidecar injector and - # istio_workload_mtls_ability can be specifically placed on a pod to indicate its ability to receive mtls traffic. 
- - source_labels: [__meta_kubernetes_pod_annotation_sidecar_istio_io_status, __meta_kubernetes_pod_annotation_istio_mtls] - action: keep - regex: (([^;]+);([^;]*))|(([^;]*);(true)) - - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] - action: replace - target_label: __metrics_path__ - regex: (.+) - - source_labels: [__address__] # Only keep address that is host:port - action: keep # otherwise an extra target with ':443' is added for https scheme - regex: ([^:]+):(\d+) - - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] - action: replace - regex: ([^:]+)(?::\d+)?;(\d+) - replacement: $1:$2 - target_label: __address__ - - action: labelmap - regex: __meta_kubernetes_pod_label_(.+) - - source_labels: [__meta_kubernetes_namespace] - action: replace - target_label: namespace - - source_labels: [__meta_kubernetes_pod_name] - action: replace - target_label: pod_name - ---- - -# Source: istio/charts/prometheus/templates/clusterrole.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRole -metadata: - name: prometheus-istio-system - labels: - app: prometheus - chart: prometheus-1.0.6 - heritage: Tiller - release: istio -rules: - - apiGroups: [""] - resources: - - nodes - - services - - endpoints - - pods - - nodes/proxy - verbs: ["get", "list", "watch"] - - apiGroups: [""] - resources: - - configmaps - verbs: ["get"] - - nonResourceURLs: ["/metrics"] - verbs: ["get"] - ---- - -# Source: istio/charts/prometheus/templates/serviceaccount.yaml -apiVersion: v1 -kind: ServiceAccount -metadata: - name: prometheus - namespace: istio-system - labels: - app: prometheus - chart: prometheus-1.0.6 - heritage: Tiller - release: istio - ---- - -# Source: istio/charts/prometheus/templates/clusterrolebindings.yaml -apiVersion: rbac.authorization.k8s.io/v1beta1 -kind: ClusterRoleBinding -metadata: - name: prometheus-istio-system - labels: - app: prometheus - chart: prometheus-1.0.6 - heritage: Tiller - release: istio -roleRef: - apiGroup: rbac.authorization.k8s.io - kind: ClusterRole - name: prometheus-istio-system -subjects: - - kind: ServiceAccount - name: prometheus - namespace: istio-system - ---- - -# Source: istio/charts/prometheus/templates/service.yaml -apiVersion: v1 -kind: Service -metadata: - name: prometheus - namespace: istio-system - annotations: - prometheus.io/scrape: 'true' - labels: - name: prometheus -spec: - selector: - app: prometheus - ports: - - name: http-prometheus - protocol: TCP - port: 9090 - ---- - -# Source: istio/charts/prometheus/templates/deployment.yaml -apiVersion: apps/v1beta1 -kind: Deployment -metadata: - name: prometheus - namespace: istio-system - labels: - app: prometheus - chart: prometheus-1.0.6 - heritage: Tiller - release: istio -spec: - replicas: 1 - selector: - matchLabels: - app: prometheus - template: - metadata: - labels: - app: prometheus - annotations: - sidecar.istio.io/inject: "false" - scheduler.alpha.kubernetes.io/critical-pod: "" - spec: - serviceAccountName: prometheus - containers: - - name: prometheus - image: "docker.io/prom/prometheus:v2.3.1" - imagePullPolicy: IfNotPresent - args: - - '--storage.tsdb.retention=6h' - - '--config.file=/etc/prometheus/prometheus.yml' - ports: - - containerPort: 9090 - name: http - livenessProbe: - httpGet: - path: /-/healthy - port: 9090 - readinessProbe: - httpGet: - path: /-/ready - port: 9090 - resources: - requests: - cpu: 10m - - volumeMounts: - - name: config-volume - mountPath: /etc/prometheus - - mountPath: /etc/istio-certs - name: istio-certs - 
volumes: - - name: config-volume - configMap: - name: prometheus - - name: istio-certs - secret: - defaultMode: 420 - optional: true - secretName: istio.default - affinity: - nodeAffinity: - requiredDuringSchedulingIgnoredDuringExecution: - nodeSelectorTerms: - - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - ppc64le - - s390x - preferredDuringSchedulingIgnoredDuringExecution: - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - amd64 - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - ppc64le - - weight: 2 - preference: - matchExpressions: - - key: beta.kubernetes.io/arch - operator: In - values: - - s390x - ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: requestcount - namespace: istio-system -spec: - value: "1" - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.host | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - request_protocol: api.protocol | context.protocol | "unknown" - response_code: response.code | 200 - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: requestduration - namespace: istio-system -spec: - value: response.duration | "0ms" - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.host | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - request_protocol: api.protocol | context.protocol | "unknown" - response_code: response.code | 200 - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: 
metric -metadata: - name: requestsize - namespace: istio-system -spec: - value: request.size | 0 - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.host | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - request_protocol: api.protocol | context.protocol | "unknown" - response_code: response.code | 200 - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: responsesize - namespace: istio-system -spec: - value: response.size | 0 - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.host | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - request_protocol: api.protocol | context.protocol | "unknown" - response_code: response.code | 200 - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: tcpbytesent - namespace: istio-system -spec: - value: connection.sent.bytes | 0 - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - 
destination_service: destination.service.name | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: metric -metadata: - name: tcpbytereceived - namespace: istio-system -spec: - value: connection.received.bytes | 0 - dimensions: - reporter: conditional((context.reporter.kind | "inbound") == "outbound", "source", "destination") - source_workload: source.workload.name | "unknown" - source_workload_namespace: source.workload.namespace | "unknown" - source_principal: source.principal | "unknown" - source_app: source.labels["app"] | "unknown" - source_version: source.labels["version"] | "unknown" - destination_workload: destination.workload.name | "unknown" - destination_workload_namespace: destination.workload.namespace | "unknown" - destination_principal: destination.principal | "unknown" - destination_app: destination.labels["app"] | "unknown" - destination_version: destination.labels["version"] | "unknown" - destination_service: destination.service.name | "unknown" - destination_service_name: destination.service.name | "unknown" - destination_service_namespace: destination.service.namespace | "unknown" - connection_security_policy: conditional((context.reporter.kind | "inbound") == "outbound", "unknown", conditional(connection.mtls | false, "mutual_tls", "none")) - monitored_resource_type: '"UNSPECIFIED"' ---- -apiVersion: "config.istio.io/v1alpha2" -kind: prometheus -metadata: - name: handler - namespace: istio-system -spec: - metrics: - - name: requests_total - instance_name: requestcount.metric.istio-system - kind: COUNTER - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - request_protocol - - response_code - - connection_security_policy - - name: request_duration_seconds - instance_name: requestduration.metric.istio-system - kind: DISTRIBUTION - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - request_protocol - - response_code - - connection_security_policy - buckets: - explicit_buckets: - bounds: [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10] - - name: request_bytes - instance_name: requestsize.metric.istio-system - kind: DISTRIBUTION - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - request_protocol - - response_code - - connection_security_policy - buckets: - exponentialBuckets: - numFiniteBuckets: 8 - scale: 1 - growthFactor: 10 
- - name: response_bytes - instance_name: responsesize.metric.istio-system - kind: DISTRIBUTION - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - request_protocol - - response_code - - connection_security_policy - buckets: - exponentialBuckets: - numFiniteBuckets: 8 - scale: 1 - growthFactor: 10 - - name: tcp_sent_bytes_total - instance_name: tcpbytesent.metric.istio-system - kind: COUNTER - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - connection_security_policy - - name: tcp_received_bytes_total - instance_name: tcpbytereceived.metric.istio-system - kind: COUNTER - label_names: - - reporter - - source_app - - source_principal - - source_workload - - source_workload_namespace - - source_version - - destination_app - - destination_principal - - destination_workload - - destination_workload_namespace - - destination_version - - destination_service - - destination_service_name - - destination_service_namespace - - connection_security_policy ---- -apiVersion: "config.istio.io/v1alpha2" -kind: rule -metadata: - name: promhttp - namespace: istio-system -spec: - match: context.protocol == "http" || context.protocol == "grpc" - actions: - - handler: handler.prometheus - instances: - - requestcount.metric - - requestduration.metric - - requestsize.metric - - responsesize.metric ---- -apiVersion: "config.istio.io/v1alpha2" -kind: rule -metadata: - name: promtcp - namespace: istio-system -spec: - match: context.protocol == "tcp" - actions: - - handler: handler.prometheus - instances: - - tcpbytesent.metric - - tcpbytereceived.metric ---- diff --git a/artifacts/gloo/canary.yaml b/artifacts/gloo/canary.yaml deleted file mode 100644 index 2412f6ae7..000000000 --- a/artifacts/gloo/canary.yaml +++ /dev/null @@ -1,52 +0,0 @@ -apiVersion: flagger.app/v1alpha3 -kind: Canary -metadata: - name: podinfo - namespace: test -spec: - provider: gloo - targetRef: - apiVersion: apps/v1 - kind: Deployment - name: podinfo - progressDeadlineSeconds: 60 - autoscalerRef: - apiVersion: autoscaling/v2beta1 - kind: HorizontalPodAutoscaler - name: podinfo - service: - port: 9898 - canaryAnalysis: - interval: 10s - threshold: 10 - maxWeight: 50 - stepWeight: 5 - metrics: - - name: request-success-rate - threshold: 99 - interval: 1m - - name: request-duration - threshold: 500 - interval: 30s - webhooks: - - name: acceptance-test - type: pre-rollout - url: http://flagger-loadtester.test/ - timeout: 10s - metadata: - type: bash - cmd: "curl -sd 'test' http://podinfo-canary:9898/token | grep token" - - name: gloo-acceptance-test - type: pre-rollout - url: http://flagger-loadtester.test/ - timeout: 10s - metadata: - type: bash - cmd: "curl -sd 'test' -H 'Host: app.example.com' http://gateway-proxy-v2.gloo-system/token | grep token" - - name: load-test - url: http://flagger-loadtester.test/ - timeout: 5s - metadata: - type: cmd - cmd: "hey -z 2m -q 5 -c 2 -host app.example.com http://gateway-proxy-v2.gloo-system" - logCmdOutput: 
"true" diff --git a/artifacts/gloo/virtual-service.yaml b/artifacts/gloo/virtual-service.yaml deleted file mode 100644 index 169e561a6..000000000 --- a/artifacts/gloo/virtual-service.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: gateway.solo.io/v1 -kind: VirtualService -metadata: - name: podinfo - namespace: test -spec: - virtualHost: - domains: - - '*' - name: podinfo - routes: - - matcher: - prefix: / - routeAction: - upstreamGroup: - name: podinfo - namespace: test diff --git a/artifacts/helmtester/deployment.yaml b/artifacts/helmtester/deployment.yaml deleted file mode 100644 index cc50ff369..000000000 --- a/artifacts/helmtester/deployment.yaml +++ /dev/null @@ -1,58 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: flagger-helmtester - namespace: kube-system - labels: - app: flagger-helmtester -spec: - selector: - matchLabels: - app: flagger-helmtester - template: - metadata: - labels: - app: flagger-helmtester - annotations: - prometheus.io/scrape: "true" - spec: - serviceAccountName: tiller - containers: - - name: helmtester - image: weaveworks/flagger-loadtester:0.8.0 - imagePullPolicy: IfNotPresent - ports: - - name: http - containerPort: 8080 - command: - - ./loadtester - - -port=8080 - - -log-level=info - - -timeout=1h - livenessProbe: - exec: - command: - - wget - - --quiet - - --tries=1 - - --timeout=4 - - --spider - - http://localhost:8080/healthz - timeoutSeconds: 5 - readinessProbe: - exec: - command: - - wget - - --quiet - - --tries=1 - - --timeout=4 - - --spider - - http://localhost:8080/healthz - timeoutSeconds: 5 - resources: - limits: - memory: "512Mi" - cpu: "1000m" - requests: - memory: "32Mi" - cpu: "10m" diff --git a/artifacts/helmtester/service.yaml b/artifacts/helmtester/service.yaml deleted file mode 100644 index 61d8c2286..000000000 --- a/artifacts/helmtester/service.yaml +++ /dev/null @@ -1,16 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: flagger-helmtester - namespace: kube-system - labels: - app: flagger-helmtester -spec: - type: ClusterIP - selector: - app: flagger-helmtester - ports: - - name: http - port: 80 - protocol: TCP - targetPort: http \ No newline at end of file diff --git a/artifacts/loadtester/config.yaml b/artifacts/loadtester/config.yaml deleted file mode 100644 index b9d0f5685..000000000 --- a/artifacts/loadtester/config.yaml +++ /dev/null @@ -1,19 +0,0 @@ ---- -apiVersion: v1 -kind: ConfigMap -metadata: - name: flagger-loadtester-bats -data: - tests: | - #!/usr/bin/env bats - - @test "check message" { - curl -sS http://${URL} | jq -r .message | { - run cut -d $' ' -f1 - [ $output = "greetings" ] - } - } - - @test "check headers" { - curl -sS http://${URL}/headers | grep X-Request-Id - } diff --git a/artifacts/loadtester/deployment.yaml b/artifacts/loadtester/deployment.yaml deleted file mode 100644 index 09862656a..000000000 --- a/artifacts/loadtester/deployment.yaml +++ /dev/null @@ -1,67 +0,0 @@ -apiVersion: apps/v1 -kind: Deployment -metadata: - name: flagger-loadtester - labels: - app: flagger-loadtester -spec: - selector: - matchLabels: - app: flagger-loadtester - template: - metadata: - labels: - app: flagger-loadtester - annotations: - prometheus.io/scrape: "true" - spec: - containers: - - name: loadtester - image: weaveworks/flagger-loadtester:0.13.0 - imagePullPolicy: IfNotPresent - ports: - - name: http - containerPort: 8080 - command: - - ./loadtester - - -port=8080 - - -log-level=info - - -timeout=1h - livenessProbe: - exec: - command: - - wget - - --quiet - - --tries=1 - - --timeout=4 - - --spider 
- - http://localhost:8080/healthz - timeoutSeconds: 5 - readinessProbe: - exec: - command: - - wget - - --quiet - - --tries=1 - - --timeout=4 - - --spider - - http://localhost:8080/healthz - timeoutSeconds: 5 - resources: - limits: - memory: "512Mi" - cpu: "1000m" - requests: - memory: "32Mi" - cpu: "10m" - securityContext: - readOnlyRootFilesystem: true - runAsUser: 10001 -# volumeMounts: -# - name: tests -# mountPath: /bats -# readOnly: true -# volumes: -# - name: tests -# configMap: -# name: flagger-loadtester-bats \ No newline at end of file diff --git a/artifacts/loadtester/service.yaml b/artifacts/loadtester/service.yaml deleted file mode 100644 index 772b20afe..000000000 --- a/artifacts/loadtester/service.yaml +++ /dev/null @@ -1,15 +0,0 @@ -apiVersion: v1 -kind: Service -metadata: - name: flagger-loadtester - labels: - app: flagger-loadtester -spec: - type: ClusterIP - selector: - app: flagger-loadtester - ports: - - name: http - port: 80 - protocol: TCP - targetPort: http \ No newline at end of file diff --git a/artifacts/namespaces/test.yaml b/artifacts/namespaces/test.yaml deleted file mode 100644 index cff2ab629..000000000 --- a/artifacts/namespaces/test.yaml +++ /dev/null @@ -1,7 +0,0 @@ -apiVersion: v1 -kind: Namespace -metadata: - name: test - labels: - istio-injection: enabled - appmesh.k8s.aws/sidecarInjectorWebhook: enabled diff --git a/artifacts/nginx/canary.yaml b/artifacts/nginx/canary.yaml deleted file mode 100644 index acb9afbc5..000000000 --- a/artifacts/nginx/canary.yaml +++ /dev/null @@ -1,70 +0,0 @@ -apiVersion: flagger.app/v1alpha3 -kind: Canary -metadata: - name: podinfo - namespace: test -spec: - # deployment reference - targetRef: - apiVersion: apps/v1 - kind: Deployment - name: podinfo - # ingress reference - ingressRef: - apiVersion: extensions/v1beta1 - kind: Ingress - name: podinfo - # HPA reference (optional) - autoscalerRef: - apiVersion: autoscaling/v2beta1 - kind: HorizontalPodAutoscaler - name: podinfo - # the maximum time in seconds for the canary deployment - # to make progress before it is rollback (default 600s) - progressDeadlineSeconds: 60 - service: - # ClusterIP port number - port: 80 - # container port number or name - targetPort: 9898 - canaryAnalysis: - # schedule interval (default 60s) - interval: 10s - # max number of failed metric checks before rollback - threshold: 10 - # max traffic percentage routed to canary - # percentage (0-100) - maxWeight: 50 - # canary increment step - # percentage (0-100) - stepWeight: 5 - # NGINX Prometheus checks - metrics: - - name: request-success-rate - # minimum req success rate (non 5xx responses) - # percentage (0-100) - threshold: 99 - interval: 1m - - name: "latency" - threshold: 0.5 - interval: 1m - query: | - histogram_quantile(0.99, - sum( - rate( - http_request_duration_seconds_bucket{ - kubernetes_namespace="test", - kubernetes_pod_name=~"podinfo-[0-9a-zA-Z]+(-[0-9a-zA-Z]+)" - }[1m] - ) - ) by (le) - ) - # external checks (optional) - webhooks: - - name: load-test - url: http://flagger-loadtester.test/ - timeout: 5s - metadata: - type: cmd - cmd: "hey -z 1m -q 10 -c 2 http://app.example.com/" - logCmdOutput: "true" diff --git a/artifacts/nginx/ingress.yaml b/artifacts/nginx/ingress.yaml deleted file mode 100644 index c5a6fa62d..000000000 --- a/artifacts/nginx/ingress.yaml +++ /dev/null @@ -1,17 +0,0 @@ -apiVersion: extensions/v1beta1 -kind: Ingress -metadata: - name: podinfo - namespace: test - labels: - app: podinfo - annotations: - kubernetes.io/ingress.class: "nginx" -spec: - rules: - - host: 
app.example.com - http: - paths: - - backend: - serviceName: podinfo - servicePort: 9898 From 6d4db45d6cb6d962e2fa430291a79b7442625ca5 Mon Sep 17 00:00:00 2001 From: stefanprodan Date: Tue, 3 Mar 2020 13:37:13 +0200 Subject: [PATCH 06/10] build: update Go to v1.14 and Alpine to v3.11 --- .circleci/config.yml | 8 ++++---- Dockerfile | 2 +- go.mod | 2 +- 3 files changed, 6 insertions(+), 6 deletions(-) diff --git a/.circleci/config.yml b/.circleci/config.yml index deebb06b0..cb676c7dd 100644 --- a/.circleci/config.yml +++ b/.circleci/config.yml @@ -3,7 +3,7 @@ jobs: build-binary: docker: - - image: circleci/golang:1.13 + - image: circleci/golang:1.14 working_directory: ~/build steps: - checkout @@ -47,7 +47,7 @@ jobs: push-container: docker: - - image: circleci/golang:1.13 + - image: circleci/golang:1.14 steps: - checkout - setup_remote_docker: @@ -59,7 +59,7 @@ jobs: push-binary: docker: - - image: circleci/golang:1.13 + - image: circleci/golang:1.14 working_directory: ~/build steps: - checkout @@ -175,7 +175,7 @@ jobs: push-helm-charts: docker: - - image: circleci/golang:1.13 + - image: circleci/golang:1.14 steps: - checkout - run: diff --git a/Dockerfile b/Dockerfile index 85af4aa80..8f693b771 100644 --- a/Dockerfile +++ b/Dockerfile @@ -1,4 +1,4 @@ -FROM alpine:3.10 +FROM alpine:3.11 RUN addgroup -S flagger \ && adduser -S -g flagger flagger \ diff --git a/go.mod b/go.mod index 514a1f615..75f58ddb2 100644 --- a/go.mod +++ b/go.mod @@ -1,6 +1,6 @@ module github.com/weaveworks/flagger -go 1.13 +go 1.14 require ( github.com/Masterminds/semver/v3 v3.0.3 From a0a9b7d29aaade90029f6e7d92140ea19d58f579 Mon Sep 17 00:00:00 2001 From: stefanprodan Date: Tue, 3 Mar 2020 13:51:12 +0200 Subject: [PATCH 07/10] e2e: use kustomize to install the load tester --- artifacts/flagger/account.yaml | 4 ++-- artifacts/flagger/deployment.yaml | 5 +---- docs/gitbook/usage/webhooks.md | 5 +---- test/e2e-contour-tests.sh | 2 +- test/e2e-gloo-tests.sh | 2 +- test/e2e-linkerd-tests.sh | 2 +- test/e2e-nginx-tests.sh | 2 +- 7 files changed, 8 insertions(+), 14 deletions(-) diff --git a/artifacts/flagger/account.yaml b/artifacts/flagger/account.yaml index c71e6108e..173eb5e49 100644 --- a/artifacts/flagger/account.yaml +++ b/artifacts/flagger/account.yaml @@ -2,7 +2,7 @@ apiVersion: v1 kind: ServiceAccount metadata: name: flagger - namespace: istio-system + namespace: default labels: app: flagger --- @@ -109,4 +109,4 @@ roleRef: subjects: - kind: ServiceAccount name: flagger - namespace: istio-system + namespace: default diff --git a/artifacts/flagger/deployment.yaml b/artifacts/flagger/deployment.yaml index 4a04cac48..55c2b8b0f 100644 --- a/artifacts/flagger/deployment.yaml +++ b/artifacts/flagger/deployment.yaml @@ -2,7 +2,7 @@ apiVersion: apps/v1 kind: Deployment metadata: name: flagger - namespace: istio-system + namespace: default labels: app: flagger spec: @@ -30,9 +30,6 @@ spec: command: - ./flagger - -log-level=info - - -control-loop-interval=10s - - -mesh-provider=$(MESH_PROVIDER) - - -metrics-server=http://prometheus.istio-system.svc.cluster.local:9090 livenessProbe: exec: command: diff --git a/docs/gitbook/usage/webhooks.md b/docs/gitbook/usage/webhooks.md index 16056db51..6f77e8c88 100644 --- a/docs/gitbook/usage/webhooks.md +++ b/docs/gitbook/usage/webhooks.md @@ -116,10 +116,7 @@ that generates traffic during analysis when configured as a webhook. 
First you need to deploy the load test runner in a namespace with sidecar injection enabled: ```bash -export REPO=https://raw.githubusercontent.com/weaveworks/flagger/master - -kubectl -n test apply -f ${REPO}/artifacts/loadtester/deployment.yaml -kubectl -n test apply -f ${REPO}/artifacts/loadtester/service.yaml +kubectl apply -k github.com/weaveworks/flagger//kustomize/tester ``` Or by using Helm: diff --git a/test/e2e-contour-tests.sh b/test/e2e-contour-tests.sh index 70d183726..90f9c84b1 100755 --- a/test/e2e-contour-tests.sh +++ b/test/e2e-contour-tests.sh @@ -11,7 +11,7 @@ echo '>>> Creating test namespace' kubectl create namespace test echo '>>> Installing load tester' -kubectl -n test apply -f ${REPO_ROOT}/artifacts/loadtester/ +kubectl apply -k ${REPO_ROOT}/kustomize/tester kubectl -n test rollout status deployment/flagger-loadtester echo '>>> Initialising canary' diff --git a/test/e2e-gloo-tests.sh b/test/e2e-gloo-tests.sh index f4525c722..75bad44dd 100755 --- a/test/e2e-gloo-tests.sh +++ b/test/e2e-gloo-tests.sh @@ -11,7 +11,7 @@ echo '>>> Creating test namespace' kubectl create namespace test echo '>>> Installing load tester' -kubectl -n test apply -f ${REPO_ROOT}/artifacts/loadtester/ +kubectl apply -k ${REPO_ROOT}/kustomize/tester kubectl -n test rollout status deployment/flagger-loadtester echo '>>> Initialising canary' diff --git a/test/e2e-linkerd-tests.sh b/test/e2e-linkerd-tests.sh index 519f9f58a..c93b3d730 100755 --- a/test/e2e-linkerd-tests.sh +++ b/test/e2e-linkerd-tests.sh @@ -11,7 +11,7 @@ kubectl create namespace test kubectl annotate namespace test linkerd.io/inject=enabled echo '>>> Installing the load tester' -kubectl -n test apply -f ${REPO_ROOT}/artifacts/loadtester/ +kubectl apply -k ${REPO_ROOT}/kustomize/tester kubectl -n test rollout status deployment/flagger-loadtester echo '>>> Initialising canary' diff --git a/test/e2e-nginx-tests.sh b/test/e2e-nginx-tests.sh index 3cf04fe97..66a33e458 100755 --- a/test/e2e-nginx-tests.sh +++ b/test/e2e-nginx-tests.sh @@ -11,7 +11,7 @@ echo '>>> Creating test namespace' kubectl create namespace test echo '>>> Installing load tester' -kubectl -n test apply -f ${REPO_ROOT}/artifacts/loadtester/ +kubectl apply -k ${REPO_ROOT}/kustomize/tester kubectl -n test rollout status deployment/flagger-loadtester echo '>>> Initialising canary' From f164eac58edf4c1403d994f67e54cc9bb657cdf6 Mon Sep 17 00:00:00 2001 From: stefanprodan Date: Tue, 3 Mar 2020 14:14:11 +0200 Subject: [PATCH 08/10] docs: add API changes section to dev guide --- docs/gitbook/dev/dev-guide.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/docs/gitbook/dev/dev-guide.md b/docs/gitbook/dev/dev-guide.md index 9e6e557b6..35f6337f4 100644 --- a/docs/gitbook/dev/dev-guide.md +++ b/docs/gitbook/dev/dev-guide.md @@ -98,6 +98,22 @@ Run unit tests: make test ``` +### API changes + +If you made changes to `pkg/apis` regenerate the Kubernetes client sets with: + +```bash +make codegen +``` + +Update the validation spec in `artifacts/flagger/crd.yaml` and run: + +```bash +make crd +``` + +Note that any change to the CRDs must be accompanied by an update to the Open API schema. 
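For example, exposing a hypothetical boolean field on the canary analysis would require a matching entry in the validation schema; a minimal sketch of such an update (the `mycheck` field is illustrative only and not part of the real API):

```yaml
# artifacts/flagger/crd.yaml (excerpt) - illustrative sketch, not the full schema
validation:
  openAPIV3Schema:
    properties:
      spec:
        properties:
          analysis:
            properties:
              # hypothetical field, shown only to illustrate the schema update
              mycheck:
                type: boolean
```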
+ ### Manual testing Install a service mesh and/or an ingress controller on your cluster and deploy Flagger From 8d9dde2dc77ba65456ed8746c60f368d8afc70ae Mon Sep 17 00:00:00 2001 From: stefanprodan Date: Wed, 4 Mar 2020 11:14:00 +0200 Subject: [PATCH 09/10] docs: update Flux tutorial to latest version --- docs/gitbook/tutorials/canary-helm-gitops.md | 43 ++++++++++++-------- docs/gitbook/usage/webhooks.md | 10 ++--- 2 files changed, 31 insertions(+), 22 deletions(-) diff --git a/docs/gitbook/tutorials/canary-helm-gitops.md b/docs/gitbook/tutorials/canary-helm-gitops.md index 69d164b33..e028ca380 100644 --- a/docs/gitbook/tutorials/canary-helm-gitops.md +++ b/docs/gitbook/tutorials/canary-helm-gitops.md @@ -252,8 +252,6 @@ Create a git repository with the following content: └── helmtester.yaml ``` -You can find the git source [here](https://github.com/stefanprodan/flagger/tree/master/artifacts/cluster). - Define the `frontend` release using Flux `HelmRelease` custom resource: ```yaml @@ -263,8 +261,8 @@ metadata: name: frontend namespace: test annotations: - flux.weave.works/automated: "true" - flux.weave.works/tag.chart-image: semver:~3.1 + fluxcd.io/automated: "true" + filter.fluxcd.io/chart-image: semver:~3.1 spec: releaseName: frontend chart: @@ -288,21 +286,27 @@ spec: enabled: true ``` -In the `chart` section I've defined the release source by specifying the Helm repository \(hosted on GitHub Pages\), chart name and version. In the `values` section I've overwritten the defaults set in values.yaml. +In the `chart` section I've defined the release source by specifying the Helm repository (hosted on GitHub Pages), +chart name and version. In the `values` section I've overwritten the defaults set in values.yaml. -With the `flux.weave.works` annotations I instruct Flux to automate this release. When an image tag in the sem ver range of `3.1.0 - 3.1.99` is pushed to Docker Hub, Flux will upgrade the Helm release and from there Flagger will pick up the change and start a canary deployment. +With the `fluxcd.io` annotations I instruct Flux to automate this release. +When an image tag in the sem ver range of `3.1.0 - 3.1.99` is pushed to Docker Hub, +Flux will upgrade the Helm release and from there Flagger will pick up the change and start a canary deployment. -Install [Weave Flux](https://github.com/weaveworks/flux) and its Helm Operator by specifying your Git repo URL: +Install [Flux](https://github.com/fluxcd/flux) and its +[Helm Operator](https://github.com/fluxcd/helm-operator) by specifying your Git repo URL: ```bash helm repo add fluxcd https://charts.fluxcd.io helm install --name flux \ ---set helmOperator.create=true \ ---set helmOperator.createCRD=true \ --set git.url=git@github.com:/ \ --namespace fluxcd \ fluxcd/flux + +helm upgrade -i helm-operator fluxcd/helm-operator \ +--namespace fluxcd \ +--set git.ssh.secretName=flux-git-deploy ``` At startup Flux generates a SSH key and logs the public key. Find the SSH public key with: @@ -311,11 +315,14 @@ At startup Flux generates a SSH key and logs the public key. Find the SSH public kubectl -n fluxcd logs deployment/flux | grep identity.pub | cut -d '"' -f2 ``` -In order to sync your cluster state with Git you need to copy the public key and create a deploy key with write access on your GitHub repository. +In order to sync your cluster state with Git you need to copy the public key +and create a deploy key with write access on your GitHub repository. 
-Open GitHub, navigate to your fork, go to _Setting > Deploy keys_ click on _Add deploy key_, check _Allow write access_, paste the Flux public key and click _Add key_. +Open GitHub, navigate to your fork, go to _Setting > Deploy keys_ click on _Add deploy key_, check _Allow write access_, +paste the Flux public key and click _Add key_. -After a couple of seconds Flux will apply the Kubernetes resources from Git and Flagger will launch the `frontend` and `backend` apps. +After a couple of seconds Flux will apply the Kubernetes resources from Git and +Flagger will launch the `frontend` and `backend` apps. A CI/CD pipeline for the `frontend` release could look like this: @@ -336,12 +343,14 @@ If the canary fails, fix the bug, do another patch release eg `3.1.2` and the wh A canary deployment can fail due to any of the following reasons: * the container image can't be downloaded -* the deployment replica set is stuck for more then ten minutes \(eg. due to a container crash loop\) -* the webooks \(acceptance tests, helm tests, load tests, etc\) are returning a non 2xx response -* the HTTP success rate \(non 5xx responses\) metric drops under the threshold +* the deployment replica set is stuck for more then ten minutes (eg. due to a container crash loop) +* the webooks (acceptance tests, helm tests, load tests, etc) are returning a non 2xx response +* the HTTP success rate (non 5xx responses) metric drops under the threshold * the HTTP average duration metric goes over the threshold * the Istio telemetry service is unable to collect traffic metrics -* the metrics server \(Prometheus\) can't be reached +* the metrics server (Prometheus) can't be reached -If you want to find out more about managing Helm releases with Flux here are two in-depth guides: [gitops-helm](https://github.com/stefanprodan/gitops-helm) and [gitops-istio](https://github.com/stefanprodan/gitops-istio). +If you want to find out more about managing Helm releases with Flux here are two in-depth guides: +[gitops-helm](https://github.com/stefanprodan/gitops-helm) +and [gitops-istio](https://github.com/stefanprodan/gitops-istio). diff --git a/docs/gitbook/usage/webhooks.md b/docs/gitbook/usage/webhooks.md index 6f77e8c88..288ed19b5 100644 --- a/docs/gitbook/usage/webhooks.md +++ b/docs/gitbook/usage/webhooks.md @@ -30,13 +30,13 @@ Spec: - name: "start gate" type: confirm-rollout url: http://flagger-loadtester.test/gate/approve - - name: "smoke test" + - name: "helm test" type: pre-rollout - url: http://flagger-helmtester.kube-system/ + url: http://flagger-helmtester.flagger/ timeout: 3m metadata: - type: "helm" - cmd: "test podinfo --cleanup" + type: "helmv3" + cmd: "test podinfo -n test" - name: "load test" type: rollout url: http://flagger-loadtester.test/ @@ -273,7 +273,7 @@ If you are using Helm v3, you'll have to create a dedicated service account and timeout: 3m metadata: type: "helmv3" - cmd: "test run {{ .Release.Name }} --timeout 3m -n {{ .Release.Namespace }}" + cmd: "test {{ .Release.Name }} --timeout 3m -n {{ .Release.Namespace }}" ``` As an alternative to Helm you can use the [Bash Automated Testing System](https://github.com/bats-core/bats-core) to run your tests. 
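For example, a Bats suite can be wired in as a pre-rollout webhook through the load tester; a minimal sketch, assuming the test file is mounted into the tester pod at `/bats/acceptance.bats` (both the mount path and the file name are assumptions):

```yaml
  webhooks:
    - name: "bats acceptance tests"
      type: pre-rollout
      url: http://flagger-loadtester.test/
      timeout: 2m
      metadata:
        type: "bash"
        # the mount path and test file are illustrative; adjust to your ConfigMap
        cmd: "bats /bats/acceptance.bats"
```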
From cfd2ff92bfe4ddab4f7b81debb4d93af57c66517 Mon Sep 17 00:00:00 2001 From: stefanprodan Date: Wed, 4 Mar 2020 11:14:21 +0200 Subject: [PATCH 10/10] Add Ingress v2 to roadmap --- README.md | 1 + docs/gitbook/usage/how-it-works.md | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 241edc24c..4f2e12d02 100644 --- a/README.md +++ b/README.md @@ -192,6 +192,7 @@ For more details on how the canary analysis and promotion works please [read the ### Roadmap +* Add support for Kubernetes [Ingress v2](https://github.com/kubernetes-sigs/service-apis) * Integrate with other service meshes like Consul Connect and ingress controllers like HAProxy, ALB * Integrate with other metrics providers like InfluxDB, Stackdriver, SignalFX * Add support for comparing the canary metrics to the primary ones and do the validation based on the deviation between the two diff --git a/docs/gitbook/usage/how-it-works.md b/docs/gitbook/usage/how-it-works.md index 4b0cea479..ba1bd998d 100644 --- a/docs/gitbook/usage/how-it-works.md +++ b/docs/gitbook/usage/how-it-works.md @@ -3,7 +3,7 @@ [Flagger](https://github.com/weaveworks/flagger) can be configured to automate the release process for Kubernetes workloads with a custom resource named canary. -### Canary custom resource +### Canary resource The canary custom resource defines the release process of an application running on Kubernetes and is portable across clusters, service meshes and ingress providers.
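For example, a minimal canary definition condensed from the Istio example earlier in this series (a sketch only; provider-specific fields such as gateways and hosts are omitted):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 80
    targetPort: 9898
  analysis:
    interval: 1m
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
```

The same resource shape is reused across providers; typically only `spec.provider` and the provider-specific service fields change.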