Can Istio be used with other Ingress proxies?
This is a common question, and the short answer is: yes!
For many engineers looking to adopt Istio, a long-standing question has been how to take advantage of the many benefits Istio provides, including solving telemetry, security and transport, and policy wholesale. They have redesigned their entire suite of services around what they expect from the mesh, so they need every part of the mesh to line up with those expectations too.
With mutual TLS (mTLS) specifically, engineers want the encrypted, authenticated communication it provides between applications. mTLS requires both sides of a connection to prove their identity, which gives stronger security, so it is no surprise that engineers want it. But where an established ingress proxy such as NGINX or HAProxy already exists, they want to keep it without provisioning the ingress's certificates from Citadel (Citadel can be customized in many ways, but it does not yet offer an easy way to do this). This can be done.
How to do it
The common approach is to run the ingress proxy with an Istio sidecar: the sidecar handles the certificates and identity issued by Citadel and performs mTLS inside the mesh.
Some people get stuck on how to configure the ingress proxy correctly, but the configuration is actually quite simple. What is required depends on how the ingress is deployed. There are three common ways to deploy ingress proxies in Kubernetes. The first is to run a dedicated set of ingress instances per team (in Kubernetes terms, per namespace). The second is to share one set of ingress instances across many teams (namespaces). The third is a hybrid: each team runs a dedicated ingress that spans the namespaces it owns, or an organization has some teams with dedicated ingresses and some teams sharing one, or any mix of the above. These three approaches reduce to the two cases we need to configure in Istio: an ingress in the same namespace as the services it routes to, and an ingress in a different namespace than the services it routes to.
Whichever architecture you deploy the ingress with, the same set of tools solves the problem (each piece is shown individually below, with a consolidated sketch after the list). We need to:
- Deploy the ingress proxy with an Envoy sidecar, using the following annotation on the ingress deployment:
```yaml
sidecar.istio.io/inject: 'true'
```
- Since we want the ingress proxy to handle inbound traffic itself, exempt inbound traffic from the sidecar with the following annotations:
```yaml
traffic.sidecar.istio.io/includeInboundPorts: ""
traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
```
Replace the ports above with the ports your ingress deployment actually exposes.
- If the ingress proxy needs to talk to the Kubernetes API server (for example, because an ingress controller is bundled into the ingress pod, as with the NGINX ingress), it must be able to call the API server without the sidecar in the way:
```yaml
traffic.sidecar.istio.io/excludeOutboundIPRanges: "1.1.1.1/24,2.2.2.2/16,3.3.3.3/20"
```
Replace the value above with your own Kubernetes API server's IP range, which you can find with:
```bash
kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'
```
- Rather than sending outbound traffic to the list of endpoints in NGINX's upstream configuration, configure the NGINX ingress to send traffic to a single upstream Service, so that outbound traffic is intercepted by the Istio sidecar. Add the following annotations to each Ingress resource:
```yaml
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/service-upstream: "true"
nginx.ingress.kubernetes.io/upstream-vhost: httpbin.default.svc.cluster.local
```
- Configure the ingress's sidecar to send traffic to services in the mesh. This is the key piece that changes across deployment patterns. For an ingress in the same namespace as the services it sends traffic to, no extra configuration is needed: the sidecars in a namespace automatically know how to route traffic to services in that same namespace. For an ingress in a different namespace than the services it sends traffic to, write an Istio Sidecar API object in the ingress's namespace that allows traffic to egress to the services the ingress targets:
```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: ingress
  namespace: ingress-namespace
spec:
  egress:
  - hosts:
    # only the frontend service in the prod-us1 namespace
    - "prod-us1/frontend.prod-us1.svc.cluster.local"
    # any service in the prod-apis namespace
    - "prod-apis/*"
    # tripping hazard: make sure you include istio-system!
    - "istio-system/*"
```
Substitute the services or namespaces your ingress sends traffic to. Make sure to include "istio-system/*", otherwise the sidecar will not be able to communicate with the control plane (this is a temporary requirement of Istio 1.4.x that should be fixed in a future release).
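Putting these pieces together, the pod template of the ingress deployment carries all of the annotations at once. Below is a condensed sketch of that metadata (the full manifests appear in the walkthrough; `__KUBE_API_SERVER_IP__` is a placeholder substituted by `sed` later):

```yaml
# Condensed pod-template metadata for the ingress controller Deployment.
template:
  metadata:
    labels:
      app: ingress-nginx
    annotations:
      # Inject the Envoy sidecar next to NGINX.
      sidecar.istio.io/inject: 'true'
      # Let NGINX, not Envoy, receive inbound traffic on the ingress ports.
      traffic.sidecar.istio.io/includeInboundPorts: ""
      traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
      # Let the bundled ingress controller reach the Kubernetes API server directly.
      traffic.sidecar.istio.io/excludeOutboundIPRanges: __KUBE_API_SERVER_IP__
```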
After applying all of this configuration to Kubernetes (see the full walkthrough below for details), we have the deployment described above. We can send a few curl requests to services in the cluster to verify that traffic flows through Envoy and that mTLS is enforced:
- curl via the ingress in the same namespace as the application:
```bash
curl $(kubectl get svc -n default ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/ip -v
```
- curl via the ingress in a different namespace than the application:
```bash
curl $(kubectl get svc -n ingress ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/ip -v
```
- curl from a pod with no sidecar in another namespace:
```bash
kubectl exec -it $(kubectl get pod -n legacy -l app=sleep -o jsonpath='{.items[0].metadata.name}') -n legacy -- curl httpbin.default.svc.cluster.local:8000/ip -v
```
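Before sending those curls, it can also help to confirm that the sidecar really was injected next to NGINX. A quick check, assuming the `app=ingress-nginx` label used in the manifests below (expect to see both `nginx` and `istio-proxy` listed):

```bash
# List the containers of the ingress pod; both nginx and istio-proxy should appear.
kubectl get pod -n default -l app=ingress-nginx \
  -o jsonpath='{.items[0].spec.containers[*].name}'
```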
Step-by-step details
- Create a cluster:
```bash
gcloud container clusters create -m n1-standard-2 ingress-test
```
- Download Istio 1.4.2:
```bash
export ISTIO_VERSION=1.4.2; curl -L https://istio.io/downloadIstio | sh -
```
- Deploy Istio (a demo deployment with mTLS enabled):
```bash
./istio-1.4.2/bin/istioctl manifest apply \
  --set values.global.mtls.enabled=true \
  --set values.global.controlPlaneSecurityEnabled=true
```
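The two `--set` flags above enable mTLS mesh-wide and secure the control plane. For reference, in the Istio 1.4 APIs the mesh-wide part is roughly equivalent to applying a strict MeshPolicy; a minimal sketch, not needed if you use the flags:

```yaml
# Roughly what values.global.mtls.enabled=true configures: strict mTLS mesh-wide.
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
```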
- Label the default namespace for automatic sidecar injection, then deploy httpbin:
```bash
kubectl label namespace default istio-injection=enabled
kubectl apply -f ./istio-1.4.2/samples/httpbin/httpbin.yaml
```
- Deploy the NGINX ingress controller configured for the default namespace:
```bash
export KUBE_API_SERVER_IP=$(kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}')/32
sed "s#__KUBE_API_SERVER_IP__#${KUBE_API_SERVER_IP}#" nginx-default-ns.yaml | kubectl apply -f -
```
The contents of nginx-default-ns.yaml:
```yaml
# nginx-default-ns.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: httpbin.default.svc.cluster.local
spec:
  backend:
    serviceName: httpbin
    servicePort: 8000
---
# Deployment: nginx-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
        # Do not redirect inbound traffic to Envoy.
        traffic.sidecar.istio.io/includeInboundPorts: ""
        traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
        # Exclude outbound traffic to kubernetes master from redirection.
        traffic.sidecar.istio.io/excludeOutboundIPRanges: __KUBE_API_SERVER_IP__
        sidecar.istio.io/inject: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
          securityContext:
            runAsUser: 0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/nginx-default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/nginx-tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/nginx-udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --v=10
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 8
            initialDelaySeconds: 15
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
          readinessProbe:
            failureThreshold: 8
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
---
# Service: ingress-nginx
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: default
  labels:
    app: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: default
  labels:
    app: ingress-nginx
data:
  ssl-redirect: "false"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-tcp-services
  namespace: default
  labels:
    app: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-udp-services
  namespace: default
  labels:
    app: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: default
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: default
---
# Deployment: nginx-default-http-backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-default-http-backend
  namespace: default
  labels:
    app: nginx-default-http-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-default-http-backend
  template:
    metadata:
      labels:
        app: nginx-default-http-backend
      # rewrite kubelet's probe request to pilot agent to prevent health check failure under mtls
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: gcr.io/google_containers/defaultbackend:1.4
          securityContext:
            runAsUser: 0
          ports:
            - name: http
              containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
# Service: nginx-default-http-backend
apiVersion: v1
kind: Service
metadata:
  name: nginx-default-http-backend
  namespace: default
  labels:
    app: nginx-default-http-backend
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: nginx-default-http-backend
```
- Deploy the NGINX ingress controller configured for the ingress namespace:
#" nginx-ingress-ns.yaml | kubectl apply -n ingress -f - '>
```bash
kubectl create namespace ingress
kubectl label namespace ingress istio-injection=enabled
sed "s#__KUBE_API_SERVER_IP__#${KUBE_API_SERVER_IP}#" nginx-ingress-ns.yaml | kubectl apply -n ingress -f -
```
The contents of nginx-ingress-ns.yaml:
```yaml
# nginx-ingress-ns.yaml
# Deployment: nginx-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
        # Do not redirect inbound traffic to Envoy.
        traffic.sidecar.istio.io/includeInboundPorts: ""
        traffic.sidecar.istio.io/excludeInboundPorts: "80,443"
        # Exclude outbound traffic to kubernetes master from redirection.
        traffic.sidecar.istio.io/excludeOutboundIPRanges: __KUBE_API_SERVER_IP__
        sidecar.istio.io/inject: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.12.0
          securityContext:
            runAsUser: 0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/nginx-default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/nginx-tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/nginx-udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --v=10
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 8
            initialDelaySeconds: 15
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
          readinessProbe:
            failureThreshold: 8
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
---
# Service: ingress-nginx
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress
  labels:
    app: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress
  labels:
    app: ingress-nginx
data:
  ssl-redirect: "false"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-tcp-services
  namespace: ingress
  labels:
    app: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-udp-services
  namespace: ingress
  labels:
    app: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  namespace: ingress
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  namespace: ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress
---
# Deployment: nginx-default-http-backend
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-default-http-backend
  namespace: ingress
  labels:
    app: nginx-default-http-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-default-http-backend
  template:
    metadata:
      labels:
        app: nginx-default-http-backend
      # rewrite kubelet's probe request to pilot agent to prevent health check failure under mtls
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: gcr.io/google_containers/defaultbackend:1.4
          securityContext:
            runAsUser: 0
          ports:
            - name: http
              containerPort: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi
---
# Service: nginx-default-http-backend
apiVersion: v1
kind: Service
metadata:
  name: nginx-default-http-backend
  namespace: ingress
  labels:
    app: nginx-default-http-backend
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: nginx-default-http-backend
```
- Create the Ingress resource routing to httpbin:
```bash
kubectl apply -f ingress-ingress-ns.yaml
```
```yaml
# ingress-ingress-ns.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/upstream-vhost: httpbin.default.svc.cluster.local
spec:
  backend:
    serviceName: httpbin
    servicePort: 8000
```
- Create the Sidecar resource allowing traffic from the ingress namespace to the default namespace:
```bash
kubectl apply -f sidecar-ingress-ns.yaml
```
```yaml
# sidecar-ingress-ns.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: ingress
  namespace: ingress
spec:
  egress:
  - hosts:
    - "default/*"
    # tripping hazard: make sure you include istio-system!
    - "istio-system/*"
```
- Verify that external traffic can reach httpbin via the nginx ingress in both the default and ingress namespaces.
a. Verify traffic via the nginx ingress in the default namespace:
```bash
curl $(kubectl get svc -n default ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/ip -v
```
b. Verify traffic via the nginx ingress in the ingress namespace:
```bash
curl $(kubectl get svc -n ingress ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/ip -v
```
c. The expected response for both requests looks like this:
```
*   Trying 34.83.167.92...
* TCP_NODELAY set
* Connected to 34.83.167.92 (34.83.167.92) port 80 (#0)
> GET /ip HTTP/1.1
> Host: 34.83.167.92
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.9
< Date: Mon, 17 Feb 2020 21:06:18 GMT
< Content-Type: application/json
< Content-Length: 30
< Connection: keep-alive
< access-control-allow-origin: *
< access-control-allow-credentials: true
< x-envoy-upstream-service-time: 2
<
{
  "origin": "10.138.0.13"
}
* Connection #0 to host 34.83.167.92 left intact
```
- Verify that the httpbin service does not accept plaintext traffic (its sidecar requires mTLS, so a plaintext request from a pod with no sidecar is rejected).
a. Run the following commands in a shell:
```bash
kubectl create namespace legacy
kubectl apply -f ./istio-1.4.2/samples/sleep/sleep.yaml -n legacy
kubectl exec -it $(kubectl get pod -n legacy -l app=sleep -o jsonpath='{.items[0].metadata.name}') -n legacy -- curl httpbin.default.svc.cluster.local:8000/ip -v
```
b. The expected output:
```
* Expire in 0 ms for 6 (transfer 0x55d92c811680)
* Expire in 15 ms for 1 (transfer 0x55dc9cca6680)
*   Trying 10.15.247.45...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55dc9cca6680)
* Connected to httpbin.default.svc.cluster.local (10.15.247.45) port 8000 (#0)
> GET /ip HTTP/1.1
> Host: httpbin.default.svc.cluster.local:8000
> User-Agent: curl/7.64.0
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
command terminated with exit code 56
```
- Verify that the connections between the nginx ingress controllers and the httpbin service are mTLS enabled.
a. Use the istioctl CLI to check the authentication policy:
```bash
./istio-1.4.2/bin/istioctl authn tls-check $(kubectl get pod -n default -l app=ingress-nginx -o jsonpath='{.items[0].metadata.name}') httpbin.default.svc.cluster.local
./istio-1.4.2/bin/istioctl authn tls-check -n ingress $(kubectl get pod -n ingress -l app=ingress-nginx -o jsonpath='{.items[0].metadata.name}') httpbin.default.svc.cluster.local
```
b. The expected output for both nginx ingresses:
```
HOST:PORT                                  STATUS     SERVER     CLIENT           AUTHN POLICY     DESTINATION RULE
httpbin.default.svc.cluster.local:8000     OK         STRICT     ISTIO_MUTUAL     /default         istio-system/default
```
This article was written by Tetrate’s Hengyuan Jiang and Zack Butcher, and edited by Tia Louden.