Amazon Elastic Container Service for Kubernetes (EKS) is the long-awaited hosted Kubernetes offering from AWS. It provides managed Kubernetes 1.10 clusters, and is currently generally available in two regions in North America.

Building on Kubernetes, Istio is a service mesh that provides “An open platform to connect, manage, and secure microservices.” Here at Tetrate we’re active contributors to Istio, and we dogfood the latest nightly versions in all of our own infrastructure.

When we heard that EKS was generally available, I thought I’d try spinning up the latest Istio build on it. I think this counts as living on the cloud-native edge! The good news is that it works, but there are a few caveats which I’ll detail in this post, with step-by-step mitigations. I even hit a couple of upstream bugs, but my fixes for them have already been merged.


First, how do you install a nightly build? Releases of Istio are nicely packaged up into tarballs containing install artifacts, but no such thing exists for nightlies. Luckily, those tarballs really just contain Helm charts with the correct version number baked in; the actual “software” is a set of container images, containing the Istio binaries. These are downloaded from a public container image registry (like Docker Hub) and run in your Kubernetes cluster. Images based on the latest code are produced by a CI system every night, so all we need to do is install them; no need to compile Istio ourselves. That said, the Helm charts aren’t trivial, and do evolve to track changes in Istio proper, so we’ll use the nightly version of those as well, to ensure we don’t have any compatibility issues.

Let’s clone the Istio repository so that we have the latest Helm charts to hand:

$ go get -u
$ cd $GOPATH/src/

Istio follows a release-branch model of development, with a release-1.0 branch which is ahead of master:

$ git checkout release-1.0

Helm Chart

The Helm chart is located at install/kubernetes/helm/istio. It contains a values.yaml file that specifies a slightly wrong image hub and tag for our purposes (Istio has two CI systems…), but we can easily override those on the command line, as we’ll see towards the end of this post.

A recent Istio commit made the Helm chart incompatible with anything but the latest Helm 2.10 RC; the version your package manager installs won’t work, so grab the latest pre-release from GitHub. Also note that the chart creates a new namespace and deploys many resource types into it, so your Tiller install will need fairly powerful credentials (cluster-admin works).
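As a quick sanity check before installing the chart, you can compare your Helm client version against the 2.10 minimum. This is a standalone sketch of that comparison; the version string below is a stand-in for what `helm version --client --short` would print on your machine:

```shell
# Sketch: verify the Helm client is at least 2.10 before installing the chart.
# "have" is a placeholder for the real output of: helm version --client --short
required="2.10"
have="v2.10.0-rc.1"
have="${have#v}"        # strip the leading "v"
have="${have%%-*}"      # strip any pre-release suffix, e.g. "-rc.1"
# sort -V sorts version strings; if "required" sorts first (or equal), we're new enough
lowest=$(printf '%s\n%s\n' "$required" "$have" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "Helm client OK"
else
  echo "Helm client too old; grab the 2.10 pre-release from GitHub" >&2
fi
```

Swap the placeholder for the real `helm version` output and this makes a reasonable guard at the top of an install script.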

CRD Validation

One of the most noticeable things about the EKS “distribution” of Kubernetes is that Mutating and Validating Webhook Admission Controllers – basically the ability to define external code which can block or modify Resources applied to a Kubernetes cluster – aren’t enabled. Recent versions of Istio use a webhook admission controller to check that Istio resources conform to their spec. On EKS these won’t work, and worse, the failure mode is not what you might expect; any attempt to apply an Istio resource hangs (blocking kubectl). Istio actually applies default configuration to itself during installation (using a Helm post-install hook to run a Kubernetes Job), so this error means the Istio install process hangs. Eventually Helm times it out and reports a very generic error.

Luckily there’s a value in the latest Helm chart to disable installing this validator. This is one reason to use the latest nightly – this option wasn’t present in the only 1.0 release candidate, 1.0.0-snapshot.0.
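If you prefer a values file over command-line flags, the equivalent override is tiny. This is a hypothetical values-file fragment (the key name comes from the chart’s values.yaml; the filename is my own):

```yaml
# eks-values.yaml (hypothetical filename): skip installing the validating
# webhook, which EKS cannot call
global:
  configValidation: false
```

You would then pass it with `-f eks-values.yaml` instead of `--set global.configValidation=false`.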

Automatic Sidecar Injection

Another Istio feature that uses webhook admission controllers is the automatic injection of sidecars. Rather than a validating webhook, that feature uses a mutating one to modify Pod definitions as they’re submitted to the Kubernetes apiserver.

With no webhook admission controllers, this is another feature that won’t work. Worse, as before, the failure mode is a hang – the Deployment controller makes a ReplicaSet, but the ReplicaSet controller fails to make Pods, posting an “internal error” to the cluster Events stream.

Luckily, istioctl can perform this injection client-side, so as I’ll show later, we can just do it that way instead.


In order to perform the sidecar injection, istioctl has to have some information about the mesh that the Pods will be joining. It finds this by reading a ConfigMap called istio from your Kubernetes cluster. This requires some authorization; it cannot be done by an anonymous user. EKS authenticates users against AWS IAM (using a server-side authn plugin); you log in to an EKS cluster using your AWS IAM principal, rather than a separate account. This is a really nice integration and makes a lot of management headaches disappear. For kubectl to present your AWS identity to the EKS cluster, it calls a client-side plug-in called aws-iam-authenticator, which uses your “ambient” AWS credentials to get a token which the EKS authn server-side will accept.
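For reference, the stanza that wires this plug-in into your kubeconfig looks roughly like the following sketch (the user and cluster names here are placeholders, not values from your cluster):

```yaml
# kubeconfig excerpt (sketch): kubectl runs aws-iam-authenticator to mint a token
users:
- name: eks-user              # placeholder user name
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - my-eks-cluster        # placeholder cluster name
```

Because istioctl reuses kubectl’s client machinery, the same stanza serves both tools – provided the client is new enough, which is the subject of the next paragraph.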

However, plug-in support in kubectl was introduced in Kubernetes 1.10. Istioctl is based on kubectl (indeed, if the special kube-inject logic weren’t needed, kubectl would be sufficient for everything else we want to do here). The 0.8 release version of istioctl is based on kubectl 1.9, so we need a 1.10+-based version of it. This upgrade has already happened on the release-1.0 branch, so let’s just build our own istioctl from there.

We already have the Istio repository cloned and the right branch checked out. Note that this step requires that you cloned the repo from the vanity domain – the build scripts rely on it.

Istio doesn’t use the standard go build system; we have to use their Makefiles:

$ cd $GOPATH/src/
$ make istioctl

The binary will be built at $GOPATH/out/<platform>/release/istioctl; get that on your $PATH somehow.
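One way to do that – assuming a Linux build, so the platform directory is linux_amd64 (adjust for your OS) – is to prepend the output directory to your PATH:

```shell
# Put the freshly built istioctl on PATH (platform directory assumed: linux_amd64)
ISTIO_OUT="$GOPATH/out/linux_amd64/release"
export PATH="$ISTIO_OUT:$PATH"
```

Add those lines to your shell profile if you want the binary available in future sessions.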

Performing the Installation

We should now be ready to go!

If you don’t yet have an EKS cluster, you’ll need to provision one. How to do that is outside the scope of this article, but you could consider an option like eksctl, the AWS Console, or Terraform. The instructions you follow should also cover installing aws-iam-authenticator; if not, see here.

With your default kubeconfig context pointing at your EKS cluster, and aws-iam-authenticator on your path, we can install the Istio Helm chart.

$ cd $GOPATH/src/

$ helm init --upgrade

$ declare args=(
    --name istio
    --namespace istio-system
    --set                  # Nightly builds are here
    --set global.tag=nightly-release-1.0        # This is the latest tag
    --set global.imagePullPolicy=Always         # Nightly tags move
    --set global.configValidation=false         # Work around broken WebhookAdmissionControllers
    --set sidecarInjectorWebhook.enabled=false  # Ditto
  )

$ helm install install/kubernetes/helm/istio "${args[@]}"
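Incidentally, the declare/array trick above is just plain bash: collect the flags once, then expand them into the command. A minimal, standalone illustration of the pattern (the flag values here are placeholders, not a real install):

```shell
# Standalone demo of the flag-array pattern used above (placeholder flags)
declare -a demo_args=(
  --name istio
  --namespace istio-system
)
# "${demo_args[@]}" expands to one word per element; "${demo_args[*]}" joins them
joined="${demo_args[*]}"
echo "$joined"
```

Keeping the flags in an array like this makes long helm invocations easier to edit and re-run.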

Deploying BookInfo

To test your Istio installation, what better place to start than BookInfo? Remember that you’ll have to follow the “manual sidecar injection” path in the official docs.