Flagger implements the following deployment strategies:

- Canary Release
- A/B Testing
- Blue/Green
- Blue/Green Mirroring
For frontend applications that require session affinity, you should use HTTP header or cookie match conditions to ensure a set of users stays on the same version for the whole duration of the canary analysis.
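As a minimal sketch (the cookie name and regex are illustrative), an A/B testing analysis can pin users that carry a specific cookie to the canary version:

```yaml
  analysis:
    # total number of iterations
    iterations: 10
    # max number of failed checks before rollback
    threshold: 2
    # users with this cookie are routed to the canary for the whole analysis
    match:
      - headers:
          cookie:
            regex: "^(.*?;)?(canary=always)(;.*)?$"
```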
For applications that are not deployed on a service mesh, Flagger can orchestrate Blue/Green style deployments with Kubernetes L4 networking.
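A Blue/Green analysis is expressed with a fixed number of iterations instead of a traffic-weight ramp; the values below are illustrative:

```yaml
  analysis:
    # schedule interval
    interval: 1m
    # max number of failed checks before rollback
    threshold: 2
    # number of checks to run against the canary (green) before promotion
    iterations: 10
```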
Traffic mirroring can be used with the Blue/Green deployment strategy or as a pre-stage in a Canary release. Traffic mirroring copies each incoming request, sending one request to the primary and one to the canary service. Mirroring should only be used for requests that are idempotent or capable of being processed twice (once by the primary and once by the canary).
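A sketch of a Blue/Green analysis with mirroring enabled (the `mirrorWeight` value shown here is illustrative):

```yaml
  analysis:
    interval: 1m
    threshold: 2
    iterations: 10
    # mirror the traffic to the canary during the analysis
    mirror: true
    # percentage of the traffic to mirror (defaults to 100)
    mirrorWeight: 100
```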
A canary analysis is triggered by changes in any of the following objects:
- Deployment/DaemonSet PodSpec (metadata, container image, command, ports, env, resources, etc.)
- ConfigMaps mounted as volumes or mapped to environment variables
- Secrets mounted as volumes or mapped to environment variables
To retry a release you can add or change an annotation on the pod template:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        timestamp: "2020-03-10T14:24:48+0000"
```
This is the intended behavior when the analysis is disabled; it allows instant rollback and also mimics the way a Kubernetes deployment initialization works. To avoid this, enable the analysis (`skipAnalysis: false`), wait for the initialization to finish, and disable it afterward (`skipAnalysis: true`).
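For reference, the flag lives on the canary spec (a minimal sketch):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
spec:
  # when true, the canary is promoted without running the analysis
  skipAnalysis: false
```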
Assuming the app name is `podinfo`, you can define a canary like this:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    # service name (optional)
    name: podinfo
    # ClusterIP port number (required)
    port: 9898
    # container port name or number
    targetPort: http
    # port name can be http or grpc (default http)
    portName: http
```
If `service.name` is not specified, then `targetRef.name` is used for the apex domain and as the name prefix of the canary/primary services. You should treat the service name as an immutable field; changing it could result in routing conflicts.
Based on the canary spec service, Flagger generates the following Kubernetes ClusterIP services:
- `<service.name>.<namespace>.svc.cluster.local` with selector `app=<name>-primary`
- `<service.name>-primary.<namespace>.svc.cluster.local` with selector `app=<name>-primary`
- `<service.name>-canary.<namespace>.svc.cluster.local` with selector `app=<name>`
This ensures that traffic coming from a namespace outside the mesh to `podinfo.test:9898` will be routed to the latest stable release of your app.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo
spec:
  type: ClusterIP
  selector:
    app: podinfo-primary
  ports:
    - name: http
      port: 9898
      protocol: TCP
      targetPort: http
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo-primary
spec:
  type: ClusterIP
  selector:
    app: podinfo-primary
  ports:
    - name: http
      port: 9898
      protocol: TCP
      targetPort: http
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo-canary
spec:
  type: ClusterIP
  selector:
    app: podinfo
  ports:
    - name: http
      port: 9898
      protocol: TCP
      targetPort: http
```
The `podinfo-canary.test:9898` address is available only during the canary analysis and can be used for conformance testing or load testing.
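For example, a load-testing webhook can target that address during the analysis; this sketch assumes the flagger-loadtester is installed in the `test` namespace:

```yaml
  analysis:
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```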
If port discovery is enabled, Flagger scans the deployment spec and extracts the container ports, excluding the port specified in the canary service and the Envoy sidecar ports. These ports are then added to the generated ClusterIP services.
For a deployment that exposes two ports:
```yaml
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9899"
    spec:
      containers:
        - name: app
          ports:
            - containerPort: 8080
            - containerPort: 9090
```
You can enable port discovery so that Prometheus will be able to reach port `9090` over mTLS:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
spec:
  service:
    # container port used for canary analysis
    port: 8080
    # port name can be http or grpc (default http)
    portName: http
    # add all the other container ports
    # to the ClusterIP services (default false)
    portDiscovery: true
    trafficPolicy:
      tls:
        mode: ISTIO_MUTUAL
```
Both ports `8080` and `9090` will be added to the ClusterIP services.
The target deployment must have a single label selector in the format `app: <DEPLOYMENT-NAME>`:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
```
Besides `app`, Flagger supports `name` and `app.kubernetes.io/name` selectors. If you use a different convention, you can specify your label with the `-selector-labels` flag.
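A hedged sketch of how the flag could be passed to the Flagger container; the extra `team` label and the `-mesh-provider` flag shown here are illustrative:

```yaml
# excerpt from a Flagger deployment (illustrative)
spec:
  containers:
    - name: flagger
      args:
        - -mesh-provider=istio
        - -selector-labels=app,name,app.kubernetes.io/name,team
```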
When creating or updating the primary deployment, Flagger rewrites the first value in each match expression defined in the target deployment's pod anti-affinity and topology spread constraints, provided the expression satisfies the following two requirements:

- The key in the match expression must be one of the labels specified by the `-selector-labels` parameter. The default labels are `app`, `name` and `app.kubernetes.io/name`.
- The value must match the name of the target deployment.
The rewrite done by Flagger in these cases is to suffix the value with `-primary`. This rewrite can be used to spread the pods created by the canary and primary deployments across different availability zones.
Example target deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - podinfo
                topologyKey: topology.kubernetes.io/zone
```
Example of generated primary deployment:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo-primary
spec:
  selector:
    matchLabels:
      app: podinfo-primary
  template:
    metadata:
      labels:
        app: podinfo-primary
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchExpressions:
                    - key: app
                      operator: In
                      values:
                        - podinfo-primary
                topologyKey: topology.kubernetes.io/zone
```
It is also possible to use a label other than `app`, `name` or `app.kubernetes.io/name`. Anti-affinity example (using a different label):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
spec:
  selector:
    matchLabels:
      app: podinfo
      affinity: podinfo
  template:
    metadata:
      labels:
        app: podinfo
        affinity: podinfo
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    affinity: podinfo
                topologyKey: topology.kubernetes.io/zone
```
Flagger measures the request success rate and duration using Prometheus queries.
Spec:
```yaml
  analysis:
    metrics:
      - name: request-success-rate
        # minimum req success rate (non 5xx responses)
        # percentage (0-100)
        thresholdRange:
          min: 99
        interval: 1m
```
Istio query:
```
sum(
  rate(
    istio_requests_total{
      reporter="destination",
      destination_workload_namespace=~"$namespace",
      destination_workload=~"$workload",
      response_code!~"5.*"
    }[$interval]
  )
)
/
sum(
  rate(
    istio_requests_total{
      reporter="destination",
      destination_workload_namespace=~"$namespace",
      destination_workload=~"$workload"
    }[$interval]
  )
)
```
Envoy query (App Mesh):
```
sum(
  rate(
    envoy_cluster_upstream_rq{
      kubernetes_namespace="$namespace",
      kubernetes_pod_name=~"$workload",
      envoy_response_code!~"5.*"
    }[$interval]
  )
)
/
sum(
  rate(
    envoy_cluster_upstream_rq{
      kubernetes_namespace="$namespace",
      kubernetes_pod_name=~"$workload"
    }[$interval]
  )
)
```
Envoy query (Contour and Gloo):
```
sum(
  rate(
    envoy_cluster_upstream_rq{
      envoy_cluster_name=~"$namespace-$workload",
      envoy_response_code!~"5.*"
    }[$interval]
  )
)
/
sum(
  rate(
    envoy_cluster_upstream_rq{
      envoy_cluster_name=~"$namespace-$workload"
    }[$interval]
  )
)
```
Spec:
```yaml
  analysis:
    metrics:
      - name: request-duration
        # maximum req duration P99
        # milliseconds
        thresholdRange:
          max: 500
        interval: 1m
```
Istio query:
```
histogram_quantile(0.99,
  sum(
    irate(
      istio_request_duration_seconds_bucket{
        reporter="destination",
        destination_workload=~"$workload",
        destination_workload_namespace=~"$namespace"
      }[$interval]
    )
  ) by (le)
)
```
Envoy query (App Mesh, Contour and Gloo):
```
histogram_quantile(0.99,
  sum(
    irate(
      envoy_cluster_upstream_rq_time_bucket{
        kubernetes_pod_name=~"$workload",
        kubernetes_namespace=~"$namespace"
      }[$interval]
    )
  ) by (le)
)
```
Note that the metric interval should be lower than or equal to the control loop interval.
The analysis can be extended with metrics provided by Prometheus, Datadog and AWS CloudWatch. For more details on how custom metrics can be used please read the metrics docs.
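As a hedged sketch of the Prometheus provider, a custom metric is defined in a MetricTemplate and referenced from the analysis by name; the query, address and threshold below are illustrative:

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: not-found-percentage
  namespace: test
spec:
  provider:
    type: prometheus
    # illustrative in-cluster Prometheus address
    address: http://prometheus.monitoring:9090
  query: |
    100 - sum(
      rate(
        istio_requests_total{
          reporter="destination",
          destination_workload_namespace="{{ namespace }}",
          destination_workload="{{ target }}",
          response_code!="404"
        }[{{ interval }}]
      )
    )
    /
    sum(
      rate(
        istio_requests_total{
          reporter="destination",
          destination_workload_namespace="{{ namespace }}",
          destination_workload="{{ target }}"
        }[{{ interval }}]
      )
    ) * 100
```

The template is then referenced from the canary analysis:

```yaml
  analysis:
    metrics:
      - name: "404s percentage"
        templateRef:
          name: not-found-percentage
          namespace: test
        thresholdRange:
          max: 5
        interval: 1m
```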
Flagger creates an Istio Virtual Service and Destination Rules based on the Canary service spec. The service configuration lets you expose an app inside or outside the mesh. You can also define traffic policies, HTTP match conditions, URI rewrite rules, CORS policies, timeout and retries.
The following spec exposes the `frontend` workload inside the mesh on `frontend.test.svc.cluster.local:9898` and outside the mesh on `frontend.example.com`. You'll have to specify an Istio ingress gateway for external hosts.
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: frontend
  namespace: test
spec:
  service:
    # container port
    port: 9898
    # service port name (optional, will default to "http")
    portName: http-frontend
    # Istio gateways (optional)
    gateways:
      - public-gateway.istio-system.svc.cluster.local
      - mesh
    # Istio virtual service host names (optional)
    hosts:
      - frontend.example.com
    # Istio traffic policy
    trafficPolicy:
      tls:
        # use ISTIO_MUTUAL when mTLS is enabled
        mode: DISABLE
    # HTTP match conditions (optional)
    match:
      - uri:
          prefix: /
    # HTTP rewrite (optional)
    rewrite:
      uri: /
    # Istio retry policy (optional)
    retries:
      attempts: 3
      perTryTimeout: 1s
      retryOn: "gateway-error,connect-failure,refused-stream"
    # Add headers (optional)
    headers:
      request:
        add:
          x-some-header: "value"
    # cross-origin resource sharing policy (optional)
    corsPolicy:
      allowOrigin:
        - example.com
      allowMethods:
        - GET
      allowCredentials: false
      allowHeaders:
        - x-some-header
      maxAge: 24h
```
For the above spec Flagger will generate the following virtual service:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
  namespace: test
  ownerReferences:
    - apiVersion: flagger.app/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: Canary
      name: frontend
      uid: 3a4a40dd-3875-11e9-8e1d-42010a9c0fd1
spec:
  gateways:
    - public-gateway.istio-system.svc.cluster.local
    - mesh
  hosts:
    - frontend.example.com
    - frontend
  http:
    - corsPolicy:
        allowHeaders:
          - x-some-header
        allowMethods:
          - GET
        allowOrigin:
          - example.com
        maxAge: 24h
      headers:
        request:
          add:
            x-some-header: "value"
      match:
        - uri:
            prefix: /
      rewrite:
        uri: /
      route:
        - destination:
            host: frontend-primary
          weight: 100
        - destination:
            host: frontend-canary
          weight: 0
      retries:
        attempts: 3
        perTryTimeout: 1s
        retryOn: "gateway-error,connect-failure,refused-stream"
```
For each destination in the virtual service a rule is generated:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend-primary
  namespace: test
spec:
  host: frontend-primary
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: frontend-canary
  namespace: test
spec:
  host: frontend-canary
  trafficPolicy:
    tls:
      mode: DISABLE
```
Flagger keeps the virtual service and destination rules in sync with the canary service spec. Any direct modification to the virtual service spec will be overwritten.
To expose a workload inside the mesh on `http://backend.test.svc.cluster.local:9898`, the service spec can contain only the container port and the traffic policy:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: backend
  namespace: test
spec:
  service:
    port: 9898
    trafficPolicy:
      tls:
        mode: DISABLE
```
Based on the above spec, Flagger will create several ClusterIP services like:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-primary
  ownerReferences:
    - apiVersion: flagger.app/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: Canary
      name: backend
      uid: 2ca1a9c7-2ef6-11e9-bd01-42010a9c0145
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 9898
      protocol: TCP
      targetPort: 9898
  selector:
    app: backend-primary
```
Flagger works for user facing apps exposed outside the cluster via an ingress gateway and for backend HTTP APIs that are accessible only from inside the mesh.
If delegation is enabled, Flagger generates the Istio VirtualService without hosts and gateways, making the service compatible with Istio delegation.
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: backend
  namespace: test
spec:
  service:
    delegation: true
    port: 9898
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  analysis:
    interval: 15s
    threshold: 15
    maxWeight: 30
    stepWeight: 10
```
Based on the above spec, Flagger will create the following virtual service:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
  namespace: test
  ownerReferences:
    - apiVersion: flagger.app/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: Canary
      name: backend
      uid: 58562662-5e10-4512-b269-2b789c1b30fe
spec:
  http:
    - route:
        - destination:
            host: podinfo-primary
          weight: 100
        - destination:
            host: podinfo-canary
          weight: 0
```
The following virtual service then forwards traffic for `/podinfo` to the delegate VirtualService defined above.
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
  namespace: test
spec:
  gateways:
    - public-gateway.istio-system.svc.cluster.local
    - mesh
  hosts:
    - frontend.example.com
    - frontend
  http:
    - match:
        - uri:
            prefix: /podinfo
      rewrite:
        uri: /
      delegate:
        name: backend
        namespace: test
```
Note that the Istio Pilot environment variable `PILOT_ENABLE_VIRTUAL_SERVICE_DELEGATE` must also be set. For details on Istio delegation, refer to the Virtual Service and pilot environment variables documentation.
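One way to set it, assuming Istio is installed with the operator, is via the pilot component (a sketch):

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    pilot:
      k8s:
        env:
          # enable VirtualService delegation support in Pilot
          - name: PILOT_ENABLE_VIRTUAL_SERVICE_DELEGATE
            value: "true"
```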
Assuming you have two apps, one that serves the main website and one that serves the REST API, you can define a canary object for each app as follows:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: website
spec:
  service:
    port: 8080
    gateways:
      - public-gateway.istio-system.svc.cluster.local
    hosts:
      - my-site.com
    match:
      - uri:
          prefix: /
    rewrite:
      uri: /
---
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: webapi
spec:
  service:
    port: 8080
    gateways:
      - public-gateway.istio-system.svc.cluster.local
    hosts:
      - my-site.com
    match:
      - uri:
          prefix: /api
    rewrite:
      uri: /
```
Based on the above configuration, Flagger will create two virtual services bound to the same ingress gateway and external host. Istio Pilot will merge the two services, and the website rule will be moved to the end of the list in the merged configuration. Note that host merging works only if the canaries are bound to an ingress gateway other than the `mesh` gateway.
When deploying Istio with global mTLS enabled, you have to set the TLS mode to `ISTIO_MUTUAL`:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
spec:
  service:
    trafficPolicy:
      tls:
        mode: ISTIO_MUTUAL
```
If you run Istio in permissive mode you can disable TLS:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
spec:
  service:
    trafficPolicy:
      tls:
        mode: DISABLE
```
In order for Flagger to be able to call the load tester service from outside the mesh, you need to disable mTLS:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: flagger-loadtester
  namespace: test
spec:
  host: "flagger-loadtester.test.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: flagger-loadtester
  namespace: test
spec:
  selector:
    matchLabels:
      app: flagger-loadtester
  mtls:
    mode: DISABLE
```