Webhooks
The canary analysis can be extended with webhooks. Flagger will call each webhook URL and determine from the response status code (HTTP 2xx) whether the canary is passing or failing.
There are several types of hooks:
- confirm-rollout hooks are executed before scaling up the canary deployment and can be used for manual approval. The rollout is paused until the hook returns a successful HTTP status code.
- pre-rollout hooks are executed before routing traffic to the canary. The canary advancement is paused if a pre-rollout hook fails; if the number of failures reaches the threshold, the canary will be rolled back.
- rollout hooks are executed during the analysis on each iteration before the metric checks. If a rollout hook call fails, the canary advancement is paused and eventually rolled back.
- confirm-traffic-increase hooks are executed right before the weight on the canary is increased. The canary advancement is paused until this hook returns HTTP 200.
- confirm-promotion hooks are executed before the promotion step. The canary promotion is paused until the hooks return HTTP 200. While the promotion is paused, Flagger will continue to run the metrics checks and rollout hooks.
- post-rollout hooks are executed after the canary has been promoted or rolled back. If a post-rollout hook fails, the error is logged.
- rollback hooks are executed while a canary deployment is in either Progressing or Waiting status. This provides the ability to rollback during analysis or while waiting for a confirmation. If a rollback hook returns a successful HTTP status code, Flagger will stop the analysis and mark the canary release as failed.
- event hooks are executed every time Flagger emits a Kubernetes event. When configured, every action that Flagger takes during a canary deployment will be sent as JSON via an HTTP POST request.
Spec:
```yaml
  analysis:
    webhooks:
      - name: "start gate"
        type: confirm-rollout
        url: http://flagger-loadtester.test/gate/approve
      - name: "helm test"
        type: pre-rollout
        url: http://flagger-helmtester.flagger/
        timeout: 3m
        metadata:
          type: "helmv3"
          cmd: "test podinfo -n test"
      - name: "load test"
        type: rollout
        url: http://flagger-loadtester.test/
        timeout: 15s
        metadata:
          cmd: "hey -z 1m -q 5 -c 2 http://podinfo-canary.test:9898/"
      - name: "traffic increase gate"
        type: confirm-traffic-increase
        url: http://flagger-loadtester.test/gate/approve
      - name: "promotion gate"
        type: confirm-promotion
        url: http://flagger-loadtester.test/gate/approve
      - name: "notify"
        type: post-rollout
        url: http://telegram.bot:8080/
        timeout: 5s
        metadata:
          some: "message"
      - name: "rollback gate"
        type: rollback
        url: http://flagger-loadtester.test/rollback/check
      - name: "send to Slack"
        type: event
        url: http://event-recevier.notifications/slack
        metadata:
          environment: "test"
          cluster: "flagger-test"
```
Note that the sum of all rollout webhooks timeouts should be lower than the analysis interval.
Webhook payload (HTTP POST):
```json
{
  "name": "podinfo",
  "namespace": "test",
  "phase": "Progressing",
  "metadata": {
    "test": "all",
    "token": "16688eb5e9f289f1991c"
  }
}
```
Response status codes:
- 200-202 - advance canary by increasing the traffic weight
- timeout or non-2xx - halt advancement and increment failed checks
On a non-2xx response Flagger will include the response body (if any) in the failed checks log and Kubernetes events.
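Any HTTP server that honors this contract can act as a webhook target. A minimal sketch in Python, assuming a hypothetical approval policy based on the canary's namespace (the real loadtester's gates are driven via its /gate API instead):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GateHandler(BaseHTTPRequestHandler):
    # Hypothetical policy: only approve canaries running in these namespaces.
    approved_namespaces = {"test"}

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        # Flagger POSTs {"name", "namespace", "phase", "metadata"} as shown above.
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Any 2xx tells Flagger to advance; any other status halts the
        # advancement and increments the failed-checks counter.
        status = 200 if payload.get("namespace") in self.approved_namespaces else 403
        self.send_response(status)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

def serve(port: int = 0) -> HTTPServer:
    """Bind the gate on localhost; port 0 picks a free port."""
    return HTTPServer(("127.0.0.1", port), GateHandler)
```

Because Flagger only looks at the status code, the handler can stay this small; the response body is optional and, on failure, ends up in the failed-checks log.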
Event payload (HTTP POST):
```json
{
  "name": "string (canary name)",
  "namespace": "string (canary namespace)",
  "phase": "string (canary phase)",
  "metadata": {
    "eventMessage": "string (canary event message)",
    "eventType": "string (canary event type)",
    "timestamp": "string (unix timestamp ms)"
  }
}
```
The event receiver can create alerts based on the received phase (possible values: Initialized, Waiting, Progressing, Promoting, Finalising, Succeeded or Failed).
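An event receiver only needs to read the documented fields; the alert text itself is up to you. A minimal formatting sketch in Python (the message layout is illustrative):

```python
def format_event(event: dict) -> str:
    """Turn a Flagger event payload into a one-line alert message.

    The field names match the event payload documented above; the
    message layout is just an example.
    """
    meta = event.get("metadata", {})
    return "[{phase}] {namespace}/{name}: {message}".format(
        phase=event.get("phase", "Unknown"),
        namespace=event.get("namespace", "?"),
        name=event.get("name", "?"),
        message=meta.get("eventMessage", ""),
    )
```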

Load Testing

For workloads that do not receive constant traffic, Flagger can be configured with a webhook that, when called, starts a load test for the target workload. If the target workload receives no traffic during the canary analysis, Flagger's metric checks will fail with "no values found for metric request-success-rate".
Flagger comes with a load testing service based on rakyll/hey that generates traffic during analysis when configured as a webhook.
Flagger Load Testing Webhook
First you need to deploy the load test runner in a namespace with sidecar injection enabled:
```bash
kubectl apply -k https://github.com/fluxcd/flagger//kustomize/tester?ref=main
```
Or by using Helm:
```bash
helm repo add flagger https://flagger.app

helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test \
--set cmd.timeout=1h
```
When deployed, the load tester API will be available at http://flagger-loadtester.test/.
Now you can add webhooks to the canary analysis spec:
```yaml
  webhooks:
    - name: load-test-get
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
    - name: load-test-post
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 -m POST -d '{test: 2}' http://podinfo-canary.test:9898/echo"
```
When the canary analysis starts, Flagger will call the webhooks and the load tester will run the hey commands in the background, if they are not already running. This will ensure that during the analysis, the podinfo-canary.test service will receive a steady stream of GET and POST requests.
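For sizing the load test: hey's -z flag sets the duration, -q the rate limit in queries per second per worker, and -c the number of workers, so a command generates at most duration × q × c requests. A quick sketch of that arithmetic:

```python
def approx_requests(duration_s: int, qps_per_worker: int, workers: int) -> int:
    """Upper bound on requests generated by `hey -z <duration> -q <qps> -c <workers>`."""
    return duration_s * qps_per_worker * workers

# "hey -z 1m -q 10 -c 2" sends at most 60 * 10 * 2 = 1200 requests.
```

With -q 5 -c 2 (the rollout hook in the spec above) that is roughly 10 requests per second, which is usually enough to keep the request-success-rate metric populated during analysis.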
If your workload is exposed outside the mesh you can point hey to the public URL and use HTTP2.
```yaml
  webhooks:
    - name: load-test-get
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "hey -z 1m -q 10 -c 2 -h2 https://podinfo.example.com/"
```
For gRPC services you can use bojand/ghz which is a similar tool to Hey but for gRPC:
```yaml
  webhooks:
    - name: grpc-load-test
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "ghz -z 1m -q 10 -c 2 --insecure podinfo.test:9898"
```
ghz uses reflection to identify which gRPC method to call. If you do not wish to enable reflection for your gRPC service, you can implement a standardized health check from the grpc-proto library. To use this health check schema without reflection, you can pass a parameter to ghz like this:
```yaml
  webhooks:
    - name: grpc-load-test-no-reflection
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        type: cmd
        cmd: "ghz --insecure --proto=/tmp/ghz/health.proto --call=grpc.health.v1.Health/Check podinfo.test:9898"
```
The load tester can run arbitrary commands as long as the binary is present in the container image. For example, if you want to replace hey with another CLI, you can create your own Docker image:
```dockerfile
FROM weaveworks/flagger-loadtester:<VER>

RUN curl -Lo /usr/local/bin/my-cli https://github.com/user/repo/releases/download/ver/my-cli \
&& chmod +x /usr/local/bin/my-cli
```

Load Testing Delegation

The load tester can also forward testing tasks to external tools; currently only nGrinder is supported.
To use this feature, add a load test task of type 'ngrinder' to the canary analysis spec:
```yaml
  webhooks:
    - name: load-test-post
      url: http://flagger-loadtester.test/
      timeout: 5s
      metadata:
        # type of this load test task, cmd or ngrinder
        type: ngrinder
        # base url of your nGrinder controller server
        server: http://ngrinder-server:port
        # id of the test to clone from, the test must have been defined
        clone: 100
        # user name and base64 encoded password to authenticate against the nGrinder server
        username: admin
        passwd: YWRtaW4=
        # the interval between nGrinder test status polling, default to 1s
        pollInterval: 5s
```
When the canary analysis starts, the load tester will initiate a clone_and_start request to the nGrinder server and start a new performance test. The load tester will periodically poll the nGrinder server for the test status, and prevent duplicate requests from being sent in subsequent analysis loops.
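The pollInterval/pollTimeout pair describes a generic poll-until-done loop. An illustrative Python sketch of those semantics (not the load tester's actual implementation):

```python
import time

def poll_until(check, interval_s=1.0, timeout_s=30.0,
               clock=time.monotonic, sleep=time.sleep):
    """Call `check()` every `interval_s` seconds until it returns True
    or `timeout_s` elapses. Returns True on success, False on timeout.

    Mirrors the pollInterval/pollTimeout idea from the nGrinder and
    Concord integrations; `clock` and `sleep` are injectable for testing.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        if check():
            return True
        sleep(interval_s)
    return False
```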

Integration Testing

Flagger comes with a testing service that can run Helm tests, Bats tests or Concord tests when configured as a webhook.
Deploy the Helm test runner in the kube-system namespace using the tiller service account:
```bash
helm repo add flagger https://flagger.app

helm upgrade -i flagger-helmtester flagger/loadtester \
--namespace=kube-system \
--set serviceAccountName=tiller
```
When deployed, the Helm tester API will be available at http://flagger-helmtester.kube-system/.
Now you can add pre-rollout webhooks to the canary analysis spec:
```yaml
  analysis:
    webhooks:
      - name: "smoke test"
        type: pre-rollout
        url: http://flagger-helmtester.kube-system/
        timeout: 3m
        metadata:
          type: "helm"
          cmd: "test {{ .Release.Name }} --cleanup"
```
When the canary analysis starts, Flagger will call the pre-rollout webhooks before routing traffic to the canary. If the helm test fails, Flagger will retry until the analysis threshold is reached and the canary is rolled back.
If you are using Helm v3, you'll have to create a dedicated service account and add the release namespace to the test command:
```yaml
  analysis:
    webhooks:
      - name: "smoke test"
        type: pre-rollout
        url: http://flagger-helmtester.kube-system/
        timeout: 3m
        metadata:
          type: "helmv3"
          cmd: "test {{ .Release.Name }} --timeout 3m -n {{ .Release.Namespace }}"
```
If the test hangs or logs error messages hinting at insufficient permissions, it may be an RBAC issue; check the Troubleshooting section for an example configuration.
As an alternative to Helm you can use the Bash Automated Testing System to run your tests.
```yaml
  analysis:
    webhooks:
      - name: "acceptance tests"
        type: pre-rollout
        url: http://flagger-batstester.default/
        timeout: 5m
        metadata:
          type: "bash"
          cmd: "bats /tests/acceptance.bats"
```
Note that you should create a ConfigMap with your Bats tests and mount it inside the tester container.
You can also configure the test runner to start a Concord process.
```yaml
  analysis:
    webhooks:
      - name: "concord integration test"
        type: pre-rollout
        url: http://flagger-concordtester.default/
        timeout: 60s
        metadata:
          type: "concord"
          org: "your-concord-org"
          project: "your-concord-project"
          repo: "your-concord-repo"
          entrypoint: "your-concord-entrypoint"
          apiKeyPath: "/tmp/concord-api-key"
          endpoint: "https://canary-endpoint/"
          pollInterval: "5"
          pollTimeout: "60"
```
org, project, repo and entrypoint represent where your test process runs in Concord. To authenticate to Concord, set apiKeyPath to the path of a file containing a valid Concord API key on the flagger-helmtester container; this can be done by mounting a Kubernetes secret in the tester's Deployment. pollInterval is the interval in seconds at which the webhook polls Concord to see if the process has finished (default 5s). pollTimeout is the time in seconds the webhook will keep polling Concord before timing out (default 30s).

Manual Gating

For manual approval of a canary deployment you can use the confirm-rollout and confirm-promotion webhooks. The confirmation rollout hooks are executed before the pre-rollout hooks. For manually approving traffic weight increase, you can use the confirm-traffic-increase webhook. Flagger will halt the canary traffic shifting and analysis until the confirm webhook returns HTTP status 200.
For manual rollback of a canary deployment you can use the rollback webhook. The rollback hook will be called during the analysis and confirmation states. If a rollback webhook returns a successful HTTP status code, Flagger will shift all traffic back to the primary instance and fail the canary.
Manual gating with Flagger's tester:
```yaml
  analysis:
    webhooks:
      - name: "gate"
        type: confirm-rollout
        url: http://flagger-loadtester.test/gate/halt
```
The /gate/halt endpoint returns HTTP 403, thus blocking the rollout.
If you have notifications enabled, Flagger will post a message to Slack or MS Teams if a canary rollout is waiting for approval.
The notifications can be disabled with:
```yaml
  analysis:
    webhooks:
      - name: "gate"
        type: confirm-rollout
        url: http://flagger-loadtester.test/gate/halt
        muteAlert: true
```
Change the URL to /gate/approve to start the canary analysis:
```yaml
  analysis:
    webhooks:
      - name: "gate"
        type: confirm-rollout
        url: http://flagger-loadtester.test/gate/approve
```
Manual gating can be driven with Flagger's tester API. Set the confirmation URL to /gate/check:
```yaml
  analysis:
    webhooks:
      - name: "ask for confirmation"
        type: confirm-rollout
        url: http://flagger-loadtester.test/gate/check
```
By default the gate is closed. You can start or resume the canary rollout with:
```bash
kubectl -n test exec -it flagger-loadtester-xxxx-xxxx sh

curl -d '{"name": "podinfo","namespace":"test"}' http://localhost:8080/gate/open
```
You can pause the rollout at any time with:
```bash
curl -d '{"name": "podinfo","namespace":"test"}' http://localhost:8080/gate/close
```
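The tester's gate behaves like a per-canary boolean: /gate/open and /gate/close flip it, and /gate/check reports 200 (proceed) or 403 (wait). An in-memory sketch of that state machine in Python (illustrative; HTTP endpoint wiring omitted, and the assumption that gates are keyed by canary name and namespace follows the payloads shown above):

```python
class Gate:
    """In-memory approval gate mirroring the tester's gate semantics."""

    def __init__(self):
        self._open = set()  # set of (name, namespace) pairs that are approved

    def open(self, name: str, namespace: str) -> None:
        """Approve the canary: subsequent checks return 200."""
        self._open.add((name, namespace))

    def close(self, name: str, namespace: str) -> None:
        """Revoke approval: subsequent checks return 403."""
        self._open.discard((name, namespace))

    def check(self, name: str, namespace: str) -> int:
        # 200 lets Flagger proceed; 403 keeps the canary in Waiting.
        return 200 if (name, namespace) in self._open else 403
```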
If a canary analysis is paused, the status will change to Waiting:
```bash
kubectl get canary/podinfo

NAME      STATUS    WEIGHT
podinfo   Waiting   0
```
The confirm-promotion hook type can be used to manually approve the canary promotion. While the promotion is paused, Flagger will continue to run the metrics checks and load tests.
```yaml
  analysis:
    webhooks:
      - name: "promotion gate"
        type: confirm-promotion
        url: http://flagger-loadtester.test/gate/halt
```
The rollback hook type can be used to manually roll back the canary promotion. As with gating, rollbacks can be driven with Flagger's tester API by setting the rollback URL to /rollback/check:
```yaml
  analysis:
    webhooks:
      - name: "rollback"
        type: rollback
        url: http://flagger-loadtester.test/rollback/check
```
By default the rollback gate is closed. You can roll back a canary rollout with:
```bash
kubectl -n test exec -it flagger-loadtester-xxxx-xxxx sh

curl -d '{"name": "podinfo","namespace":"test"}' http://localhost:8080/rollback/open
```
You can close the rollback with:
```bash
curl -d '{"name": "podinfo","namespace":"test"}' http://localhost:8080/rollback/close
```
If you have notifications enabled, Flagger will post a message to Slack or MS Teams if a canary has been rolled back.

Troubleshooting

Manually check if helm test is running

To debug in depth any issues with helm tests, you can execute commands on the flagger-loadtester pod.
```bash
kubectl exec -it deploy/flagger-loadtester -- bash

helmv3 test <release> -n <namespace> --debug
```

Helm tests hang during canary deployment

If the test execution hangs or the logs hint at insufficient permissions, check your RBAC settings.
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: helm-smoke-tester
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "watch", "list", "update"]
  # choose the permission based on helm test type (Pod or Job)
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["create", "list", "delete", "watch"]
  - apiGroups: ["batch"]
    resources: ["jobs", "jobs/log"]
    verbs: ["create", "list", "delete", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: helm-smoke-tester
  # Don't forget to update accordingly
  namespace: namespace-of-the-tested-release
subjects:
  - kind: User
    name: system:serviceaccount:linkerd:default
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: helm-smoke-tester
  apiGroup: rbac.authorization.k8s.io
```