Deploying Kubernetes clusters across availability zones can offer significant reliability benefits, especially when you use Istio for application routing and load balancing. If you have built redundant failure domains in separate zones, the mesh can automatically shift traffic to another zone should one zone fail. Istio’s locality-aware load balancing can also help reduce latency and cross-zone traffic charges from your cloud provider by keeping traffic within the same zone as much as possible.
However, there is one case where Istio’s locality-aware routing can’t help: the routing between Istio’s Envoy sidecar proxies and the Istio control plane, istiod. In the simple case, sidecars and istiod will be in the same availability zone (AZ). However, in clusters that span availability zones, you may encounter unexpectedly high charges from your cloud provider generated by cross-zone traffic between sidecars in one availability zone and istiod in another. Depending on the application, and the locations involved, these costs can quickly add up.
It may be tempting to scale istiod so that there’s an instance of it in every AZ and use Istio’s locality-aware load balancing to keep traffic local between sidecars and istiod. However, if this is not done carefully, it won’t work as you may expect.
In this article, we’ll explore ways to minimize cross-zone traffic between sidecars and multiple instances of istiod in a multi-zone cluster and reduce those cross-zone data charges.
How Sidecars Select and Connect to istiod
Istiod runs behind a Kubernetes Service that exposes its implementation of Envoy’s xDS API over gRPC. Envoy sidecars use a static bootstrap configuration, which defines an xds-grpc cluster pointing at that Service, to connect to istiod. Since this happens before Envoy has received any of Istio’s runtime configuration, sidecars can’t take advantage of Istio’s locality-aware load balancing to pick a local istiod instance. Instead, because they reach istiod through an ordinary Kubernetes Service, their connections are load balanced by kube-proxy which, in a multi-zone cluster, may be insensitive to zone locality. As a result, by default, some sidecars are likely to connect to istiod across availability zones, incurring cross-zone traffic charges no matter how you scale istiod.
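To see why, it helps to look at roughly what the sidecar’s bootstrap contains. The sketch below is a simplified, hand-written YAML rendering of the static xds-grpc cluster (the real bootstrap is a JSON file generated by pilot-agent, and the exact fields vary by Istio version); the point is that the target is simply the istiod Service name, so ordinary Service load balancing decides which istiod Pod answers.

```yaml
# Simplified sketch (not the literal Istio bootstrap) of the static cluster
# a sidecar uses to reach istiod. The address is the istiod Kubernetes
# Service, so kube-proxy picks the backend; Istio's own locality-aware
# load balancing is not in play at this point.
static_resources:
  clusters:
  - name: xds-grpc
    type: STRICT_DNS
    connect_timeout: 10s
    load_assignment:
      cluster_name: xds-grpc
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: istiod.istio-system.svc   # default discoveryAddress
                port_value: 15012                  # xDS over gRPC (TLS)
```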
Kubernetes Service Enhancement Features
Kubernetes itself offers features to enhance the native Kubernetes Service to ensure traffic locality. In older versions of Kubernetes, topology-aware traffic routing could be achieved with topology keys. Since Kubernetes 1.21, topology keys have been deprecated in favor of topology aware hints. In the following tutorial, we’ll show how to configure a multi-zone cluster so that sidecars connect to an instance of istiod within their availability zone.
We’ll start with a manually-deployed Kubernetes cluster in AWS to demonstrate the problem (Figure 1).
```
ubuntu@ip-172-20-32-30:~$ kubectl get nodes -L topology.kubernetes.io/zone
NAME                                          STATUS   ROLES           AGE   VERSION   ZONE
ip-172-20-32-131.us-west-2.compute.internal   Ready    worker          12h   v1.24.0   us-west-2a
ip-172-20-32-30.us-west-2.compute.internal    Ready    control-plane   22h   v1.24.0   us-west-2a
ip-172-20-33-215.us-west-2.compute.internal   Ready    worker          12h   v1.24.0   us-west-2b
ip-172-20-34-7.us-west-2.compute.internal     Ready    worker          12h   v1.24.0   us-west-2c
ubuntu@ip-172-20-32-30:~$
```
Figure 1: A multi-zone Kubernetes cluster in AWS
Scale istiod to Every Zone
As you can see from Figure 1, we have a Kubernetes cluster running in AWS with a worker in each availability zone. The first step is to deploy an instance of istiod in every availability zone; we can do that by editing the istiod Deployment to use Pod topology spread constraints, as in Figure 2 below. Making sure every sidecar then connects to its zone-local istiod instance is handled in a later step.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istiod
  namespace: istio-system
spec:
  template:
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: istiod
      ...
```
Figure 2: Updated istiod deployment configuration using pod topology spread constraints.
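Note that the spread constraint only controls where istiod Pods are placed; to get one instance per zone you also need at least three replicas. Figure 2 elides the rest of the spec, so if your istiod Deployment is still running a single replica, the commands below sketch one way to scale it (assuming a default istio-system install; the HPA patch only applies if your install created the usual istiod HorizontalPodAutoscaler):

```shell
# Scale istiod to one replica per zone.
kubectl -n istio-system scale deployment istiod --replicas=3
# If the default istiod HPA exists, raise its floor so the
# autoscaler doesn't immediately scale istiod back down.
kubectl -n istio-system patch hpa istiod --patch '{"spec":{"minReplicas":3}}'
```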
Once the istiod deployment is updated, we see from Figure 3 that the istiod pods are spread such that there is an instance of istiod on each worker node (recall from Figure 1 that each worker node is in a different AZ).
```
ubuntu@ip-172-20-32-30:~$ kubectl get pods -o wide -n istio-system
NAME                                    READY   STATUS    RESTARTS   AGE   IP              NODE                                          NOMINATED NODE   READINESS GATES
istio-ingressgateway-7d46d7f9f9-bx8kv   1/1     Running   0          13h   192.168.0.1     ip-172-20-33-215.us-west-2.compute.internal   <none>           <none>
istiod-7c8b6f7c8c-4djkv                 1/1     Running   0          22m   192.168.0.210   ip-172-20-32-131.us-west-2.compute.internal   <none>           <none>
istiod-7c8b6f7c8c-cdcbn                 1/1     Running   0          21m   192.168.0.101   ip-172-20-34-7.us-west-2.compute.internal     <none>           <none>
istiod-7c8b6f7c8c-kkg8s                 1/1     Running   0          22m   192.168.0.11    ip-172-20-33-215.us-west-2.compute.internal   <none>           <none>
```
Figure 3: Updated istiod deployment with a pod in every zone.
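As a quick cross-check (a hypothetical one-liner, assuming the istiod Pods carry the standard `app: istiod` label), you can map each istiod Pod’s node to its zone label:

```shell
# Print the zone label of every node that hosts an istiod Pod.
for node in $(kubectl get pods -n istio-system -l app=istiod \
    -o jsonpath='{.items[*].spec.nodeName}'); do
  kubectl get node "$node" -L topology.kubernetes.io/zone --no-headers
done
```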
Let’s deploy the bookinfo app and scale the details-v1 deployment to five replicas, all pinned to a single zone, us-west-2c (Figure 4).
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: details
    version: v1
  name: details-v1
  namespace: bookinfo
spec:
  replicas: 5
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values:
                - us-west-2c
```
Figure 4: The Bookinfo application deployment with five replicas of details-v1 pinned to us-west-2c
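If you’re following along, the steps below sketch one way to get Bookinfo running with sidecar injection before applying the changes in Figure 4; the samples path assumes you’re working from an extracted Istio release directory.

```shell
# Assumes the working directory is an extracted Istio release.
kubectl create namespace bookinfo
kubectl label namespace bookinfo istio-injection=enabled
kubectl apply -n bookinfo -f samples/bookinfo/platform/kube/bookinfo.yaml
# Then apply the node affinity and replica count from Figure 4 to details-v1.
```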
As Figure 5 shows, all the details-v1 Pods land on the same node, which (per Figure 1) is in us-west-2c.
```
ubuntu@ip-172-20-32-30:~$ kubectl get pods -n bookinfo -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP             NODE                                        NOMINATED NODE   READINESS GATES
details-v1-5484d4cf78-24zn9   2/2     Running   0          3h38m   192.168.0.98   ip-172-20-34-7.us-west-2.compute.internal   <none>           <none>
details-v1-5484d4cf78-bvbdl   2/2     Running   0          3h38m   192.168.0.97   ip-172-20-34-7.us-west-2.compute.internal   <none>           <none>
details-v1-5484d4cf78-hs9zj   2/2     Running   0          3h38m   192.168.0.96   ip-172-20-34-7.us-west-2.compute.internal   <none>           <none>
details-v1-5484d4cf78-kg4h2   2/2     Running   0          3h38m   192.168.0.91   ip-172-20-34-7.us-west-2.compute.internal   <none>           <none>
details-v1-5484d4cf78-rnqjp   2/2     Running   0          3h38m   192.168.0.93   ip-172-20-34-7.us-west-2.compute.internal   <none>           <none>
```
Figure 5: Five details-v1 pods on the us-west-2c node
Checking the current proxy status with `istioctl ps` (short for `istioctl proxy-status`), we find that the details-v1 Pods are spread across multiple istiod instances, which is not what we want. Worse, in this case we were unlucky: none of them connected to the istiod instance in their own zone.
```
ubuntu@ip-172-20-32-30:~$ ./istioctl ps
NAME                                   CLUSTER      CDS      LDS      EDS      RDS      ISTIOD                    VERSION
details-v1-59595759fc-6fs6f.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-7c8b6f7c8c-4djkv   1.13.3-tetrate-v0
details-v1-59595759fc-9pzbk.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-7c8b6f7c8c-kkg8s   1.13.3-tetrate-v0
details-v1-59595759fc-cfzfv.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-7c8b6f7c8c-kkg8s   1.13.3-tetrate-v0
details-v1-59595759fc-dxpsh.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-7c8b6f7c8c-4djkv   1.13.3-tetrate-v0
details-v1-59595759fc-fnrqg.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-7c8b6f7c8c-kkg8s   1.13.3-tetrate-v0
```
Figure 6: Output of `istioctl ps`
Add Topology Aware Hints
Let’s remedy this by adding topology aware hints on our istiod Service, as shown in Figure 7 below:
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.kubernetes.io/topology-aware-hints: auto
  name: istiod
  namespace: istio-system
```
Figure 7: Adding topology aware hints to the istiod Service configuration
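The hint only affects new connections, so existing sidecars keep their current istiod instance until they reconnect. One simple way to force that (a plain kubectl rollout, nothing Istio-specific) is to restart the Deployment:

```shell
# Restart details-v1 so its sidecars re-resolve and reconnect to istiod.
kubectl -n bookinfo rollout restart deployment details-v1
kubectl -n bookinfo rollout status deployment details-v1
```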
When we restart the details-v1 deployment and check again with `istioctl ps` (Figure 8), we can see that all five Pods are now connected to istiod-7c8b6f7c8c-cdcbn, the istiod instance in their own zone (us-west-2c).
```
ubuntu@ip-172-20-32-30:~$ ./istioctl ps
NAME                                  CLUSTER      CDS      LDS      EDS      RDS      ISTIOD                    VERSION
details-v1-8ddf9cf44-6vvfr.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-7c8b6f7c8c-cdcbn   1.13.3-tetrate-v0
details-v1-8ddf9cf44-8jsv6.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-7c8b6f7c8c-cdcbn   1.13.3-tetrate-v0
details-v1-8ddf9cf44-t4ftd.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-7c8b6f7c8c-cdcbn   1.13.3-tetrate-v0
details-v1-8ddf9cf44-wn485.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-7c8b6f7c8c-cdcbn   1.13.3-tetrate-v0
details-v1-8ddf9cf44-wtg8d.bookinfo   Kubernetes   SYNCED   SYNCED   SYNCED   SYNCED   istiod-7c8b6f7c8c-cdcbn   1.13.3-tetrate-v0
```
Figure 8: Output of `istioctl ps` after adding topology aware hints
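You can also confirm that Kubernetes actually populated zone hints on the istiod EndpointSlices; if the hints are missing (for example, because a zone has no ready endpoint), kube-proxy falls back to cluster-wide routing. A quick check:

```shell
# Look for "hints: forZones:" entries on the istiod EndpointSlices.
kubectl get endpointslices -n istio-system \
  -l kubernetes.io/service-name=istiod -o yaml | grep -B2 -A3 'hints:'
```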
Note: if you use a Kubernetes version earlier than 1.24, you need to enable Topology Aware Hints via the `TopologyAwareHints` feature gate. Unfortunately, setting feature gates is not possible in most managed Kubernetes environments, since you don’t have access to the Kubernetes control plane.
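For completeness, on a self-managed cluster older than v1.24 the gate has to be enabled on the components that implement the feature. The snippet below is only a sketch for a kubeadm-managed cluster (our assumption; adapt it to however your control plane is configured), and kube-proxy needs the same gate via the `featureGates` field of its KubeProxyConfiguration:

```yaml
# Sketch only: kubeadm-style ClusterConfiguration enabling the gate on the
# API server and controller manager. kube-proxy must enable it separately.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    feature-gates: TopologyAwareHints=true
controllerManager:
  extraArgs:
    feature-gates: TopologyAwareHints=true
```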
Conclusion
Deploying Kubernetes clusters across availability zones can offer significant reliability benefits, but requires some extra configuration to keep traffic local. With the additional configuration steps described in this article, Istio’s locality-aware routing can help reduce latency and minimize cross-zone data charges from cloud providers for your application traffic. We hope this tutorial on how to ensure locality of traffic between Istio’s data plane and control plane will help you squeeze out even more latency and cost from your cross-zone deployments.
###
If you’re new to service mesh, Tetrate has a bunch of free online courses available at Tetrate Academy that will quickly get you up to speed with Istio and Envoy.
Are you using Kubernetes? Tetrate Enterprise Gateway for Envoy (TEG) is the easiest way to get started with Envoy Gateway for production use cases. Get the power of Envoy Proxy in an easy-to-consume package managed by the Kubernetes Gateway API. Learn more ›
Getting started with Istio? If you’re looking for the surest way to get to production with Istio, check out Tetrate Istio Subscription. Tetrate Istio Subscription has everything you need to run Istio and Envoy in highly regulated and mission-critical production environments. It includes Tetrate Istio Distro, a 100% upstream distribution of Istio and Envoy that is FIPS-verified and FedRAMP ready. For teams requiring open source Istio and Envoy without proprietary vendor dependencies, Tetrate offers the ONLY 100% upstream Istio enterprise support offering.
Get a Demo