In this article, we’ll explore how to use HashiCorp Vault as a more secure way to store Istio certificates than Kubernetes Secrets. By default, Secrets are stored in etcd merely base64-encoded, not encrypted. In environments with stringent security policies this may not be acceptable, so additional security measures are needed to protect them. One such solution involves storing secrets in an external secret store provider, like HashiCorp Vault.
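To illustrate why base64 encoding offers no protection, here is a quick sketch (the value `hunter2` is a made-up placeholder): anyone who can read etcd, or run `kubectl get secret`, can recover the plaintext without any key.

```shell
# A Kubernetes Secret value is only base64-encoded; decoding it
# requires no key at all.
encoded=$(printf 'hunter2' | base64)        # what ends up in etcd
echo "$encoded"                             # aHVudGVyMg==
printf '%s' "$encoded" | base64 --decode    # hunter2
```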

Vault can be hosted both inside and outside a Kubernetes cluster. In this case, we will explore using Vault hosted outside Kubernetes, so that it can provision secrets for multiple clusters at once. This setup is also ideal for exploring Istio’s multi-cluster feature, which requires a shared trust domain.

Leveraging the vault-agent-init container, we can inject certificates and private key material into the actual Istio control plane Pods so they are bootstrapped with the external CA certificates. This avoids the dependency on Secrets to bootstrap the Istio control plane. Exactly the same technique can be used for ingress and egress certificates.

More information on how certificates are used and managed within Istio can be found in the official documentation:

For best practices based on real-life production experience, also check out the following Tetrate blog posts:

The code accompanying this blog post can be found at the following repository:

Istiod Certificate Handling

Although some of the decision logic is explained in the aforementioned blog posts, it is worthwhile to also refer to the source code to find some undocumented behavior.

// istio/pilot/pkg/bootstrap/istio_ca.go
// For backward compat, will preserve support for the "cacerts" Secret used for self-signed certificates.
// It is mounted in the same location, and if found will be used - creating the secret is sufficient, no need for
// extra options.
// In old installer, the LocalCertDir is hardcoded to /etc/cacerts and mounted from "cacerts" secret.
// Support for signing other root CA has been removed - too dangerous, no clear use case.
// Default config, for backward compat with Citadel:
// - if "cacerts" secret exists in istio-system, will be mounted. It may contain an optional "root-cert.pem",
//   with additional roots and optional {ca-key, ca-cert, cert-chain}.pem user-provided root CA.
// - if user-provided root CA is not found, the Secret "istio-ca-secret" is used, with ca-cert.pem and ca-key.pem files.
// - if neither is found, istio-ca-secret will be created.
// - a config map "istio-security" with a "caTLSRootCert" file will be used for root cert, and created if needed.
//   The config map was used by node agent - no longer possible to use in sds-agent, but we still save it for
//   backward compat. Will be removed with the node-agent. sds-agent is calling NewCitadelClient directly, using
//   K8S root.

To instruct Istio to pick up our certificates from somewhere other than the standard Kubernetes Secrets, we will leverage an environment variable (documented here) for istio-pilot (aka istiod, the Istio control plane), so certificates are picked up from an alternative location within the Kubernetes Pod. This is needed because the vault-agent-init injection container will create a new mounted volume, /vault/secrets, to drop the certificates and private key we instruct it to pull from the external Vault server.

Variable Name | Type   | Default Value | Description
ROOT_CA_DIR   | String | /etc/cacerts  | Location of a local or mounted CA root

Pod Annotations for the vault-agent-init Container

We will be leveraging Vault injector annotations to instruct the sidecar what data to pull and what Vault role to use when doing so. We also make sure the vault-agent-init container is run before our actual istiod main containers, so the latter can pick up the certificates and key material to bootstrap itself correctly. Vault annotations are enumerated and documented here. The relevant annotations we will be using in this tutorial are as follows:

vault.hashicorp.com/agent-inject (default: "false")
  Configures whether injection is explicitly enabled or disabled for a Pod. This should be set to a true or false value.

vault.hashicorp.com/agent-init-first (default: "false")
  Configures the Pod to run the Vault Agent init container first if true (last if false). This is useful when other init containers need pre-populated secrets. This should be set to a true or false value.

vault.hashicorp.com/role
  Configures the Vault role used by the Vault Agent auto-auth method. Required when vault.hashicorp.com/agent-configmap is not set.

vault.hashicorp.com/auth-path (default: auth/kubernetes)
  Configures the authentication path for the Kubernetes auth method.

vault.hashicorp.com/agent-inject-secret-<unique-name>
  Instructs Vault Agent to retrieve the secrets from Vault required by the container. The name of the secret is any unique string after vault.hashicorp.com/agent-inject-secret-, such as vault.hashicorp.com/agent-inject-secret-foobar. The value is the path in Vault where the secret is located.

vault.hashicorp.com/agent-inject-template-<unique-name>
  Configures the template Vault Agent should use for rendering a secret. The name of the template is any unique string after vault.hashicorp.com/agent-inject-template-, such as vault.hashicorp.com/agent-inject-template-foobar. This should map to the same unique value provided in vault.hashicorp.com/agent-inject-secret-<unique-name>. If not provided, a default generic template is used.
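To make the interplay of these annotations concrete, here is a hypothetical Pod metadata snippet that pulls a single KV v2 secret and renders it with a custom template; the role `myapp`, the Vault path `secret/data/hello`, and the file name `hello.txt` are all made-up placeholders, not values used later in this tutorial:

```yaml
metadata:
  annotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/agent-init-first: "true"
    vault.hashicorp.com/role: "myapp"
    vault.hashicorp.com/auth-path: "auth/kubernetes"
    # Renders the secret to /vault/secrets/hello.txt inside the Pod.
    vault.hashicorp.com/agent-inject-secret-hello.txt: "secret/data/hello"
    vault.hashicorp.com/agent-inject-template-hello.txt: |
      {{- with secret "secret/data/hello" -}}
      {{ .Data.data.value }}
      {{ end -}}
```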

Vault Server Considerations

Vault supports several methods for clients to authenticate themselves. We will be leveraging the Kubernetes auth backend, which means we will be leveraging Kubernetes ServiceAccount JWT token validation. Please note that ServiceAccount tokens are no longer automatically generated since Kubernetes 1.24. You can still create those API tokens manually, as documented here.
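As a sketch of that manual creation (names here are placeholders): on Kubernetes 1.24+ you can still request a long-lived token by creating a Secret of type kubernetes.io/service-account-token that references the ServiceAccount; the token controller then populates it with a JWT.

```yaml
# Requesting a long-lived ServiceAccount token explicitly (K8s 1.24+).
# "my-sa" is a placeholder for an existing ServiceAccount.
apiVersion: v1
kind: Secret
metadata:
  name: my-sa-token
  annotations:
    kubernetes.io/service-account.name: my-sa
type: kubernetes.io/service-account-token
```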

As to storage of our certificate and private key material, we have two options: Vault’s PKI secrets engine or the generic KV (key-value) secrets engine.

Because the PKI secret engine does not provide clean-cut APIs to retrieve the certificates and the private key we need, and because the PKI secret engine will generate a new intermediate certificate for every call (e.g., every istiod restart), we will be using the generic KV secret engine instead, storing all the values we need in a simple key-value data structure. We will assume the renewal of intermediate certificates is handled out-of-band through some service portal or CI/CD process that will store the renewed intermediate certificates in the vault server as well.

Istio’s control plane Pods need the following files in order to bootstrap their built-in CA correctly:

Key            | Value (PEM encoded)    | Details
ca-key.pem     | CA private key         | Private key of the intermediate cert, used as root CA for istiod.
ca-cert.pem    | CA public certificate  | Intermediate cert, used as root CA for istiod.
root-cert.pem  | CA root certificate    | The root of trust of our newly generated intermediate cert.
cert-chain.pem | Full certificate chain | Intermediate cert at the top, root cert at the bottom.
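To make the relationship between these four files concrete, here is a toy openssl sketch (deliberately NOT the exact parameters Istio’s cert Makefile uses) producing the same layering: a self-signed root, an intermediate signed by it, and a chain file with the intermediate on top.

```shell
# Self-signed root CA (toy parameters for illustration only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/O=Istio/CN=Root CA" -keyout root-key.pem -out root-cert.pem
# Intermediate CA key + CSR, then sign the CSR with the root.
openssl req -newkey rsa:2048 -nodes \
  -subj "/O=Istio/CN=Intermediate CA" -keyout ca-key.pem -out ca.csr
openssl x509 -req -in ca.csr -CA root-cert.pem -CAkey root-key.pem \
  -CAcreateserial -days 1 -out ca-cert.pem
# Chain file: intermediate first, root last.
cat ca-cert.pem root-cert.pem > cert-chain.pem
openssl verify -CAfile root-cert.pem ca-cert.pem   # ca-cert.pem: OK
```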


Prerequisites in terms of installed software, if you want to follow the local set-up, include:

  • kubectl to interact with the Kubernetes clusters (download)
  • helm to install vault injector and istio charts (download)
  • vault cli tool to configure the vault server (download)

If you want a local demo environment, please follow the instructions here, which use docker-compose to spin up a vault server and two separate k3s clusters. In case you bring your own Kubernetes clusters and an externally hosted Vault instance, skip ahead to the next section.

  • docker-compose to spin up a local environment (download)

In order to progress, we expect the following shell variables to be set according to your environment.

export K8S_API_SERVER_1=
export K8S_API_SERVER_2=

Vault Kubernetes Auth Backend

As mentioned in the introduction section on Vault server considerations, we will be using the Kubernetes auth backend. Since istiod will be fetching the certificates and private key material from the Vault server, let’s start off by creating the corresponding service accounts in both clusters.

kubectl --kubeconfig kubecfg1.yml create ns istio-system
kubectl --kubeconfig kubecfg2.yml create ns istio-system
kubectl --kubeconfig kubecfg1.yml apply -f istio-sa.yml
kubectl --kubeconfig kubecfg2.yml apply -f istio-sa.yml

The ServiceAccount, Secret, and ClusterRoleBinding are as below:

# istio-sa.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: istiod
  namespace: istio-system
  labels: # added for istio helm installation
    app: istiod
    app.kubernetes.io/managed-by: Helm
    release: istio-istiod
  annotations: # added for istio helm installation
    meta.helm.sh/release-name: istio-istiod
    meta.helm.sh/release-namespace: istio-system
---
apiVersion: v1
kind: Secret
metadata:
  name: istiod
  namespace: istio-system
  annotations:
    kubernetes.io/service-account.name: istiod
type: kubernetes.io/service-account-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: istiod
    namespace: istio-system

NOTE: We added Helm labels and annotations on the istiod ServiceAccount in order not to have conflicts with our Istio Helm deployment later on.

Once the ServiceAccounts in both clusters are created, let’s store their Secret tokens and the Kubernetes API server CA certificates in an output folder:

mkdir -p ./output
kubectl --kubeconfig kubecfg1.yml get secret -n istio-system istiod -o go-template="{{ .data.token }}" | base64 --decode > output/istiod1.jwt
kubectl --kubeconfig kubecfg1.yml config view --raw --minify --flatten -o jsonpath="{.clusters[].cluster.certificate-authority-data}" | base64 --decode > output/k8sapi-cert1.pem
kubectl --kubeconfig kubecfg2.yml get secret -n istio-system istiod -o go-template="{{ .data.token }}" | base64 --decode > output/istiod2.jwt
kubectl --kubeconfig kubecfg2.yml config view --raw --minify --flatten -o jsonpath="{.clusters[].cluster.certificate-authority-data}" | base64 --decode > output/k8sapi-cert2.pem

More information on the detailed content of the Kubernetes API certificate and the istiod ServiceAccount JWT token can be found here, where we also describe the vault interaction process in more depth in terms of REST API calls made to authenticate and fetch secrets. These can come in handy when debugging permission denied issues.
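When debugging those permission denied issues, it can help to peek inside the ServiceAccount JWT and check the `sub` claim that Vault validates. A small helper sketch (the demo token below is built on the spot; in our setup you would pass `"$(cat output/istiod1.jwt)"` instead):

```shell
# Decode the payload (middle segment) of a JWT.
jwt_payload() {
  # convert base64url to regular base64
  seg=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # restore the padding that JWT encoding strips
  while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="${seg}="; done
  printf '%s' "$seg" | base64 --decode
}

# Demo with a dummy header.payload.signature token:
payload=$(printf '{"sub":"system:serviceaccount:istio-system:istiod"}' \
  | base64 | tr -d '=\n' | tr '/+' '_-')
jwt_payload "eyJhbGciOiJub25lIn0.${payload}.sig"
# → {"sub":"system:serviceaccount:istio-system:istiod"}
```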

Let’s create the necessary Vault auth configuration based on the Kubernetes CA certs and JWT tokens just retrieved:

export VAULT_ADDR=http://localhost:8200
vault login root
vault auth enable --path=kubernetes-cluster1 kubernetes
vault auth enable --path=kubernetes-cluster2 kubernetes
vault write auth/kubernetes-cluster1/config \
  kubernetes_host="$K8S_API_SERVER_1" \
  kubernetes_ca_cert=@output/k8sapi-cert1.pem \
  token_reviewer_jwt="$(cat output/istiod1.jwt)"
vault write auth/kubernetes-cluster2/config \
  kubernetes_host="$K8S_API_SERVER_2" \
  kubernetes_ca_cert=@output/k8sapi-cert2.pem \
  token_reviewer_jwt="$(cat output/istiod2.jwt)"

NOTE: VAULT_ADDR is set to localhost in case you are using the docker-compose provided environment. Set this to $VAULT_SERVER in case you brought your own Vault server.

Istio Certificates and Private Key in Vault kv Secrets

Next we will create a new self-signed root certificate and generate intermediate certificates for both our clusters. We will be using the helper Makefile scripts provided by upstream Istio here.

cd certs
make -f ../certs-gen/ root-ca
make -f ../certs-gen/ istiod-cluster1-cacerts
make -f ../certs-gen/ istiod-cluster2-cacerts
cd ..

More details on the actual content and the X509v3 extensions being set can be found here. You can fine-tune the certificate generation by following the Makefile documentation here and the corresponding Makefile override values.

Let’s add the generated certificates and private key into Vault kv secrets:

export VAULT_ADDR=http://localhost:8200
vault login root
vault secrets enable -path=kubernetes-cluster1-secrets kv
vault secrets enable -path=kubernetes-cluster2-secrets kv
vault kv put kubernetes-cluster1-secrets/istiod-service/certs \
  ca_key=@certs/istiod-cluster1/ca-key.pem \
  ca_cert=@certs/istiod-cluster1/ca-cert.pem \
  root_cert=@certs/istiod-cluster1/root-cert.pem \
  cert_chain=@certs/istiod-cluster1/cert-chain.pem
vault kv put kubernetes-cluster2-secrets/istiod-service/certs \
  ca_key=@certs/istiod-cluster2/ca-key.pem \
  ca_cert=@certs/istiod-cluster2/ca-cert.pem \
  root_cert=@certs/istiod-cluster2/root-cert.pem \
  cert_chain=@certs/istiod-cluster2/cert-chain.pem

Next, restrict access to those certificates and private key per cluster, bound to the Kubernetes istiod ServiceAccount-based auth backend:

echo 'path "kubernetes-cluster1-secrets/istiod-service/certs" {
  capabilities = ["read"]
}' | vault policy write istiod-certs-cluster1 -
echo 'path "kubernetes-cluster2-secrets/istiod-service/certs" {
  capabilities = ["read"]
}' | vault policy write istiod-certs-cluster2 -
vault write auth/kubernetes-cluster1/role/istiod \
  bound_service_account_names=istiod \
  bound_service_account_namespaces=istio-system \
  policies=istiod-certs-cluster1
vault write auth/kubernetes-cluster2/role/istiod \
  bound_service_account_names=istiod \
  bound_service_account_namespaces=istio-system \
  policies=istiod-certs-cluster2

Deploy vault-inject and Istio Helm Charts

In order to deploy the Vault injector, we will be leveraging the official Vault Helm charts.

helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update
kubectl --kubeconfig kubecfg1.yml create ns vault
kubectl --kubeconfig kubecfg2.yml create ns vault
helm --kubeconfig kubecfg1.yml install -n vault vault-inject hashicorp/vault --set "injector.externalVaultAddr=$VAULT_SERVER"
helm --kubeconfig kubecfg2.yml install -n vault vault-inject hashicorp/vault --set "injector.externalVaultAddr=$VAULT_SERVER"
kubectl --kubeconfig kubecfg1.yml -n vault get pods
kubectl --kubeconfig kubecfg2.yml -n vault get pods
  NAME                                           READY   STATUS    RESTARTS   AGE
  vault-inject-agent-injector-5776975795-9vt9w   1/1     Running   0          92s
  NAME                                           READY   STATUS    RESTARTS   AGE
  vault-inject-agent-injector-5776975795-9vjnx   1/1     Running   0          91s

To install Istio, we will be using the Tetrate Istio Distro Helm charts.

helm repo add tetratelabs
helm repo update
helm --kubeconfig kubecfg1.yml install -n istio-system istio-base tetratelabs/base
helm --kubeconfig kubecfg2.yml install -n istio-system istio-base tetratelabs/base
helm --kubeconfig kubecfg1.yml install -n istio-system istio-istiod tetratelabs/istiod --values=./cluster1-values.yaml
helm --kubeconfig kubecfg2.yml install -n istio-system istio-istiod tetratelabs/istiod --values=./cluster2-values.yaml
kubectl --kubeconfig kubecfg1.yml -n istio-system get pods
kubectl --kubeconfig kubecfg2.yml -n istio-system get pods

Note how we leverage several Istio Helm chart value overrides to accomplish our desired goal:

  • Inject a pilot Pod environment variable ROOT_CA_DIR to tell istiod where to fetch certificates and private key
  • Tell the vault-agent-init container to run before the istiod container, so the secrets are available within the /vault/secrets mounted volume
  • Instruct the Vault injector to fetch secrets based on the correct location and data keys
  • Assume the Vault istiod role while doing so
  • Override the default Kubernetes auth-path, because we have multiple clusters
The relevant excerpt from cluster1-values.yaml looks like this (cluster2 is analogous, with the cluster2 secret path and auth path):

pilot:
  env:
    ROOT_CA_DIR: /vault/secrets
  podAnnotations:
    vault.hashicorp.com/agent-inject: "true"
    vault.hashicorp.com/agent-init-first: "true"
    vault.hashicorp.com/agent-inject-secret-ca-key.pem: "kubernetes-cluster1-secrets/istiod-service/certs"
    vault.hashicorp.com/agent-inject-template-ca-key.pem: |
      {{- with secret "kubernetes-cluster1-secrets/istiod-service/certs" -}}
      {{ .Data.ca_key }}
      {{ end -}}
    vault.hashicorp.com/agent-inject-secret-ca-cert.pem: "kubernetes-cluster1-secrets/istiod-service/certs"
    vault.hashicorp.com/agent-inject-template-ca-cert.pem: |
      {{- with secret "kubernetes-cluster1-secrets/istiod-service/certs" -}}
      {{ .Data.ca_cert }}
      {{ end -}}
    vault.hashicorp.com/agent-inject-secret-root-cert.pem: "kubernetes-cluster1-secrets/istiod-service/certs"
    vault.hashicorp.com/agent-inject-template-root-cert.pem: |
      {{- with secret "kubernetes-cluster1-secrets/istiod-service/certs" -}}
      {{ .Data.root_cert }}
      {{ end -}}
    vault.hashicorp.com/agent-inject-secret-cert-chain.pem: "kubernetes-cluster1-secrets/istiod-service/certs"
    vault.hashicorp.com/agent-inject-template-cert-chain.pem: |
      {{- with secret "kubernetes-cluster1-secrets/istiod-service/certs" -}}
      {{ .Data.cert_chain }}
      {{ end -}}
    vault.hashicorp.com/role: "istiod"
    vault.hashicorp.com/auth-path: "auth/kubernetes-cluster1"

When we look at the vault-agent-init container logs, we should see the authentication succeed and the four certificate files being rendered into /vault/secrets:

kubectl --kubeconfig kubecfg1.yml logs -n istio-system -l app=istiod -c vault-agent-init --tail=-1
==> Vault agent started! Log data will stream in below:

  ==> Vault agent configuration:

                      Cgo: disabled
                Log Level: info
                  Version: Vault v1.12.0, built 2022-10-10T18:14:33Z
              Version Sha: 558abfa75702b5dab4c98e86b802fb9aef43b0eb

  2022-11-18T11:01:21.398Z [INFO]  sink.file: creating file sink
  2022-11-18T11:01:21.398Z [INFO]  sink.file: file sink configured: path=/home/vault/.vault-token mode=-rw-r-----
  2022-11-18T11:01:21.398Z [INFO]  template.server: starting template server
  2022-11-18T11:01:21.398Z [INFO]  sink.server: starting sink server
  2022-11-18T11:01:21.398Z [INFO]  auth.handler: starting auth handler
  2022-11-18T11:01:21.398Z [INFO]  auth.handler: authenticating
  2022-11-18T11:01:21.398Z [INFO] (runner) creating new runner (dry: false, once: false)
  2022-11-18T11:01:21.398Z [INFO] (runner) creating watcher
  2022-11-18T11:01:21.402Z [INFO]  auth.handler: authentication successful, sending token to sinks
  2022-11-18T11:01:21.402Z [INFO]  auth.handler: starting renewal process
  2022-11-18T11:01:21.402Z [INFO]  sink.file: token written: path=/home/vault/.vault-token
  2022-11-18T11:01:21.402Z [INFO]  sink.server: sink server stopped
  2022-11-18T11:01:21.402Z [INFO]  sinks finished, exiting
  2022-11-18T11:01:21.402Z [INFO]  template.server: template server received new token
  2022-11-18T11:01:21.402Z [INFO] (runner) stopping
  2022-11-18T11:01:21.402Z [INFO] (runner) creating new runner (dry: false, once: false)
  2022-11-18T11:01:21.402Z [INFO] (runner) creating watcher
  2022-11-18T11:01:21.402Z [INFO] (runner) starting
  2022-11-18T11:01:21.403Z [INFO]  auth.handler: renewed auth token
  2022-11-18T11:01:21.515Z [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/root-cert.pem"
  2022-11-18T11:01:21.515Z [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/ca-cert.pem"
  2022-11-18T11:01:21.515Z [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/cert-chain.pem"
  2022-11-18T11:01:21.516Z [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/ca-key.pem"
  2022-11-18T11:01:21.516Z [INFO] (runner) stopping
  2022-11-18T11:01:21.516Z [INFO]  template.server: template server stopped
  2022-11-18T11:01:21.516Z [INFO] (runner) received finish
  2022-11-18T11:01:21.516Z [INFO]  auth.handler: shutdown triggered, stopping lifetime watcher
  2022-11-18T11:01:21.516Z [INFO]  auth.handler: auth handler stopped

When we look at the discovery container logs, we should see something like this:

kubectl --kubeconfig kubecfg1.yml logs -n istio-system -l app=istiod -c discovery --tail=-1
 info	Using istiod file format for signing ca files
  info	Use plugged-in cert at /vault/secrets/ca-key.pem
  info	x509 cert - Issuer: "CN=Intermediate CA,O=Istio,L=istiod-cluster1", Subject: "", SN: 39f67569f10d36a1fc91e9d82156b07d, NotBefore: "2022-11-18T11:11:59Z", NotAfter: "2032-11-15T11:13:59Z"
  info	x509 cert - Issuer: "CN=Root CA,O=Istio", Subject: "CN=Intermediate CA,O=Istio,L=istiod-cluster1", SN: dedf298a147681d6, NotBefore: "2022-11-17T22:01:54Z", NotAfter: "2024-11-16T22:01:54Z"
  info	x509 cert - Issuer: "CN=Root CA,O=Istio", Subject: "CN=Root CA,O=Istio", SN: f5bcd7e89bdb6248, NotBefore: "2022-11-17T22:01:52Z", NotAfter: "2032-11-14T22:01:52Z"
  info	Istiod certificates are reloaded
  info	spiffe	Added 1 certs to trust domain cluster.local in peer cert verifier

We can see that our Istio control plane has correctly picked up our Vault injected certificates and private key. Mission accomplished!


In this article, we have successfully bootstrapped the Istio control plane with external Vault stored certificates and private keys. The steps to achieve this included:

  • Storing the certificates and private key in per-cluster dedicated Vault secret mount paths
  • Setting up Kubernetes Vault auth backends per cluster, linked to the proper ServiceAccount
  • Defining a proper role and policy to allow access from the istiod ServiceAccount to the Vault secrets
  • Adjusting Istio Pilot bootstrap parameters to:
    • Inject the vault-agent-init sidecars
    • Fetch the correct vault secrets containing our certificates and private key
    • Use the right role and auth backend to do so
    • Pick up the certificates and private key from the correct vault secret mount path

We can use exactly the same technique to inject ingress-gateway and egress-gateway certificates. When creating Istio Gateway objects, make sure to point serverCertificate, privateKey and caCertificates to the correct files within the /vault/secrets mounted volume. We’ll leave this as an exercise for the reader.
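As a starting point for that exercise, a hypothetical Gateway server block could look like this; the gateway name, host, and file names under /vault/secrets are placeholders that depend on the agent-inject annotations you configure:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: ingress-gw            # placeholder name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - "*.example.com"     # placeholder host
      tls:
        mode: MUTUAL
        # Files rendered by the vault-agent-init container:
        serverCertificate: /vault/secrets/server-cert.pem
        privateKey: /vault/secrets/server-key.pem
        caCertificates: /vault/secrets/root-cert.pem
```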

By tying our certificate injection to Kubernetes ServiceAccounts, we have now delegated certificate lifecycle management to an external Vault instance. External processes, like a service portal or a CI/CD pipeline, can now be given dedicated roles and write/update policies to handle certificate renewal and rotation securely.


If you’re new to service mesh and Kubernetes security, we have a bunch of free online courses available at Tetrate Academy that will quickly get you up to speed with Istio and Envoy.

If you’re looking for a fast way to get to production with Istio, check out Tetrate Istio Distribution (TID). TID is Tetrate’s hardened, fully upstream Istio distribution, with FIPS-verified builds and support available. It’s a great way to get started with Istio knowing you have a trusted distribution to begin with, have an expert team supporting you, and also have the option to get to FIPS compliance quickly if you need to.

Once you have Istio up and running, you will probably need simpler ways to manage and secure your services beyond what’s available in Istio; that’s where Tetrate Service Bridge comes in. You can learn more about how Tetrate Service Bridge makes service mesh more secure, manageable, and resilient here, or contact us for a quick demo.