Open Service Mesh Deployments
This guide shows you how to use Open Service Mesh (OSM) and Flagger to automate canary deployments.
[Flagger OSM Traffic Split diagram]

Prerequisites

Flagger requires a Kubernetes cluster v1.16 or newer and Open Service Mesh 0.9.1 or newer.
Install Open Service Mesh with Prometheus and permissive traffic policy enabled.
```bash
osm install \
  --set=OpenServiceMesh.deployPrometheus=true \
  --set=OpenServiceMesh.enablePermissiveTrafficPolicy=true
```
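Before continuing, you can verify that the OSM control plane and its bundled Prometheus are running. This is a quick sanity check, assuming the default osm-system namespace:

```bash
kubectl get pods -n osm-system
```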
Install Flagger in the osm-system namespace using kubectl:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/osm?ref=main
```
Alternatively, Flagger can be installed in the osm-system namespace using Helm:
```bash
helm upgrade -i flagger flagger/flagger \
  --namespace=osm-system \
  --set meshProvider=osm \
  --set metricsServer=http://osm-prometheus.osm-system.svc:7070
```
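With either install method, you can wait for the Flagger deployment to become ready before moving on:

```bash
kubectl -n osm-system rollout status deployment/flagger
```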

Bootstrap

Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services and an SMI traffic split). These objects expose the application inside the mesh and drive the canary analysis and promotion.
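For reference, the SMI traffic split that Flagger generates looks roughly like the sketch below. You never create it yourself; the exact apiVersion depends on the OSM release, and the weights shown here are illustrative of a mid-analysis state:

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: podinfo
  namespace: test
spec:
  # the apex service that clients address
  service: podinfo
  backends:
    # Flagger shifts weight between primary and canary during analysis
    - service: podinfo-primary
      weight: 95
    - service: podinfo-canary
      weight: 5
```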
Create a test namespace, then enable OSM namespace monitoring and metrics scraping for it:
```bash
kubectl create namespace test
osm namespace add test
osm metrics enable --namespace test
```
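You can confirm that OSM picked up the namespace by inspecting its labels; the exact label keys are an OSM implementation detail, but a monitored-by marker should be present:

```bash
kubectl get namespace test --show-labels
```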
Create a podinfo deployment and a horizontal pod autoscaler:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/podinfo?ref=main
```
Install the load testing service to generate traffic during the canary analysis:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main
```
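Optionally, wait for the load tester to become ready (the tester kustomization names the deployment flagger-loadtester):

```bash
kubectl -n test rollout status deployment/flagger-loadtester
```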
Create a canary custom resource for the podinfo deployment. The following podinfo canary custom resource instructs Flagger to:
    monitor any changes to the podinfo deployment created earlier,
    detect podinfo deployment revision changes, and
    start a Flagger canary analysis, rollout and promotion if there were deployment revision changes.
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: osm
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    name: podinfo
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  service:
    # ClusterIP port number
    port: 9898
    # container port number or name (optional)
    targetPort: 9898
  analysis:
    # schedule interval (default 60s)
    interval: 30s
    # max number of failed metric checks before rollback
    threshold: 5
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 5
    # OSM Prometheus checks
    metrics:
      - name: request-success-rate
        # minimum req success rate (non 5xx responses)
        # percentage (0-100)
        thresholdRange:
          min: 99
        interval: 1m
      - name: request-duration
        # maximum req duration P99
        # milliseconds
        thresholdRange:
          max: 500
        interval: 30s
    # testing (optional)
    webhooks:
      - name: acceptance-test
        type: pre-rollout
        url: http://flagger-loadtester.test/
        timeout: 30s
        metadata:
          type: bash
          cmd: "curl -sd 'test' http://podinfo-canary.test:9898/token | grep token"
      - name: load-test
        type: rollout
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 2m -q 10 -c 2 http://podinfo-canary.test:9898/"
```
Save the above resource as podinfo-canary.yaml and then apply it:
```bash
kubectl apply -f ./podinfo-canary.yaml
```
When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. With the configuration above, the analysis runs for about five minutes: the weight advances from 5% to the 50% maxWeight in ten 5% steps, validating the HTTP metrics and rollout hooks at each 30-second interval.
After a couple of seconds Flagger will create the canary objects:
```text
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
ingresses.extensions/podinfo
canary.flagger.app/podinfo

# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
trafficsplits.split.smi-spec.io/podinfo
```
After the bootstrap, the podinfo deployment will be scaled to zero and traffic to podinfo.test will be routed to the primary pods. During the canary analysis, the podinfo-canary.test address can be used to target the canary pods directly.
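For example, while an analysis is in progress you can exec into the load tester pod (or any other pod inside the mesh) and hit the canary service directly:

```bash
kubectl -n test exec deploy/flagger-loadtester -- \
  curl -s http://podinfo-canary.test:9898/
```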

Automated Canary Promotion

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators like the HTTP request success rate, average request duration and pod health. Based on the KPI analysis, a canary is promoted or aborted.
[Flagger canary stages diagram]
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
  podinfod=stefanprodan/podinfo:3.1.1
```
Flagger detects that the deployment revision changed and starts a new rollout.
```text
kubectl -n test describe canary/podinfo

Status:
  Canary Weight:  0
  Failed Checks:  0
  Phase:          Succeeded
Events:
  New revision detected! Scaling up podinfo.test
  Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
  Pre-rollout check acceptance-test passed
  Advance podinfo.test canary weight 5
  Advance podinfo.test canary weight 10
  Advance podinfo.test canary weight 15
  Advance podinfo.test canary weight 20
  Advance podinfo.test canary weight 25
  Waiting for podinfo.test rollout to finish: 1 of 2 updated replicas are available
  Advance podinfo.test canary weight 30
  Advance podinfo.test canary weight 35
  Advance podinfo.test canary weight 40
  Advance podinfo.test canary weight 45
  Advance podinfo.test canary weight 50
  Copying podinfo.test template spec to podinfo-primary.test
  Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
  Promotion completed! Scaling down podinfo.test
```
Note that if you apply any new changes to the podinfo deployment during the canary analysis, Flagger will restart the analysis.
A canary deployment is triggered by changes in any of the following objects:
    Deployment PodSpec (container image, command, ports, env, resources, etc.; see the example after this list)
    ConfigMaps mounted as volumes or mapped to environment variables
    Secrets mounted as volumes or mapped to environment variables
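For instance, changing an environment variable is a PodSpec change and will start a new analysis just like an image update (GREETING is an arbitrary example variable):

```bash
kubectl -n test set env deployment/podinfo GREETING=hello
```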
You can monitor all canaries with:
```text
watch kubectl get canaries --all-namespaces

NAMESPACE   NAME       STATUS        WEIGHT   LASTTRANSITIONTIME
test        podinfo    Progressing   15       2019-06-30T14:05:07Z
prod        frontend   Succeeded     0        2019-06-30T16:15:07Z
prod        backend    Failed        0        2019-06-30T17:05:07Z
```

Automated Rollback

During the canary analysis you can generate HTTP 500 errors and high latency to test if Flagger pauses and rolls back the faulted version.
Trigger another canary deployment:
```bash
kubectl -n test set image deployment/podinfo \
  podinfod=stefanprodan/podinfo:3.1.2
```
Exec into the load tester pod with:
```bash
kubectl -n test exec -it flagger-loadtester-xx-xx sh
```
Repeatedly generate HTTP 500 errors:
```bash
watch -n 1 curl http://podinfo-canary.test:9898/status/500
```
Repeatedly generate latency:
```bash
watch -n 1 curl http://podinfo-canary.test:9898/delay/1
```
When the number of failed checks reaches the canary analysis threshold defined earlier in the podinfo canary custom resource, traffic is routed back to the primary, the canary is scaled to zero and the rollout is marked as failed.
```text
kubectl -n test describe canary/podinfo

Status:
  Canary Weight:  0
  Failed Checks:  10
  Phase:          Failed
Events:
  Starting canary analysis for podinfo.test
  Pre-rollout check acceptance-test passed
  Advance podinfo.test canary weight 5
  Advance podinfo.test canary weight 10
  Advance podinfo.test canary weight 15
  Halt podinfo.test advancement success rate 69.17% < 99%
  Halt podinfo.test advancement success rate 61.39% < 99%
  Halt podinfo.test advancement success rate 55.06% < 99%
  Halt podinfo.test advancement request duration 1.20s > 0.5s
  Halt podinfo.test advancement request duration 1.45s > 0.5s
  Rolling back podinfo.test failed checks threshold reached 5
  Canary failed! Scaling down podinfo.test
```

Custom Metrics

The canary analysis can be extended with Prometheus queries.
Let's define a check for HTTP 404 not found errors. Edit the canary analysis (the podinfo-canary.yaml file) and add the following metric. For more information on creating additional custom checks, see the metrics available in OSM.
```yaml
analysis:
  metrics:
    - name: "404s percentage"
      threshold: 3
      query: |
        100 - (
          sum(
            rate(
              osm_request_total{
                destination_namespace="test",
                destination_kind="Deployment",
                destination_name="podinfo",
                response_code!="404"
              }[1m]
            )
          )
          /
          sum(
            rate(
              osm_request_total{
                destination_namespace="test",
                destination_kind="Deployment",
                destination_name="podinfo"
              }[1m]
            )
          ) * 100
        )
```
The above configuration validates the canary version by checking if the HTTP 404 req/sec percentage is below three percent of the total traffic. If the 404s rate reaches the 3% threshold, then the analysis is aborted and the canary is marked as failed.
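If you want to try the query by hand before wiring it into the analysis, you can port-forward the osm-prometheus service installed earlier and run it in the Prometheus UI at http://localhost:7070:

```bash
kubectl -n osm-system port-forward svc/osm-prometheus 7070:7070
```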
Trigger a canary deployment by updating the container image:
```bash
kubectl -n test set image deployment/podinfo \
  podinfod=stefanprodan/podinfo:3.1.3
```
Exec into the load tester pod with:
```bash
kubectl -n test exec -it flagger-loadtester-xx-xx sh
```
Repeatedly generate 404s:
```bash
watch -n 1 curl http://podinfo-canary.test:9898/status/404
```
Watch the Flagger logs to confirm a successful canary rollback:
```text
kubectl -n osm-system logs deployment/flagger -f | jq .msg

Starting canary deployment for podinfo.test
Pre-rollout check acceptance-test passed
Advance podinfo.test canary weight 5
Halt podinfo.test advancement 404s percentage 6.20 > 3
Halt podinfo.test advancement 404s percentage 6.45 > 3
Halt podinfo.test advancement 404s percentage 7.22 > 3
Halt podinfo.test advancement 404s percentage 6.50 > 3
Halt podinfo.test advancement 404s percentage 6.34 > 3
Rolling back podinfo.test failed checks threshold reached 5
Canary failed! Scaling down podinfo.test
```