Background
When Kubernetes was launched in June 2014, only NodePort and LoadBalancer-type Service objects were available to expose services inside the cluster to the outside world. Ingress was introduced later to offer more control over incoming traffic. To preserve its portability and lightweight design, the Ingress API matured more slowly than other Kubernetes APIs; it did not reach GA until Kubernetes 1.19.
Ingress’ primary objective is to expose HTTP applications using a straightforward declarative syntax. You can deploy several Ingress Controllers in a cluster and select which controller an Ingress uses via its IngressClass (or by setting a default IngressClass). The Kubernetes project itself maintains only the AWS, GCE, and NGINX Ingress controllers; many third-party Ingress controllers are also available.
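For example, a minimal IngressClass and Ingress pair might look like the sketch below. The class name, hostname, and back-end Service are placeholders; the controller string k8s.io/ingress-nginx is the one the community NGINX Ingress Controller registers, so treat the whole snippet as illustrative rather than a canonical configuration.

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx   # the controller this class delegates to
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  namespace: default
spec:
  ingressClassName: nginx            # selects the controller via the IngressClass above
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: httpbin            # placeholder back-end Service
            port:
              number: 8000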
Kubernetes Ingress Diagram
The following diagram illustrates the workflow of Kubernetes Ingress:
Figure 1: Kubernetes Ingress workflow diagram.
The detailed process is as follows:
- Kubernetes cluster administrators deploy an Ingress Controller in Kubernetes.
- The Ingress Controller continuously monitors changes to IngressClass and Ingress objects in the Kubernetes API Server.
- Administrators apply IngressClass and Ingress to deploy the gateway.
- The Ingress Controller creates the corresponding ingress gateway and configures its routing rules according to the administrator’s configuration.
- In a cloud environment, the client reaches the ingress gateway through its load balancer.
- The gateway routes traffic to the corresponding back-end service based on the host and path in the HTTP request.
Limitations of Kubernetes Ingress
Although IngressClass decouples the Ingress resource from the controller implementation, the Ingress API still has significant limitations.
- Ingress is too simple for most real-world use, and it supports only HTTP routing.
- It supports only host and path matching, and there is no standard configuration for advanced routing features; these can only be achieved through controller-specific annotations. For example, URL rewriting with the NGINX Ingress Controller requires the nginx.ingress.kubernetes.io/rewrite-target annotation (see the sketch after this list). This approach no longer meets the needs of a programmable proxy.
- In practice, services in different namespaces often need to be bound to the same gateway, but an ingress gateway cannot be shared across namespaces.
- There is no delineation of responsibilities for creating and managing ingress gateways, so developers must not only configure gateway routes but also create and manage the gateways themselves.
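As an illustration of the annotation-based approach, a URL rewrite with the NGINX Ingress Controller might look like the following sketch. The rewrite-target annotation and regex-capture pattern follow the ingress-nginx documentation; the host, paths, and Service name are placeholders.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  namespace: default
  annotations:
    # Advanced behavior lives in controller-specific annotations,
    # not in the Ingress spec itself.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: httpbin            # placeholder back-end Service
            port:
              number: 8000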
Kubernetes Gateway API
The Gateway API is a collection of API resources: GatewayClass, Gateway, HTTPRoute, TCPRoute, ReferenceGrant, and others. It exposes a more general-purpose proxy API that supports more protocols than HTTP and models more infrastructure components, providing better deployment and management options for cluster operators.
In addition, the Gateway API achieves configuration decoupling by separating resource objects that people can manage in different roles. The following diagram shows the roles and objects in the Gateway API:
Figure 2: Kubernetes Gateway API.
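To make the role split concrete, the infrastructure provider (or the controller installation) supplies a GatewayClass that platform operators and developers reference but never need to modify. The sketch below assumes Istio, which registers a GatewayClass named istio with the controller name istio.io/gateway-controller; treat the details as illustrative.

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GatewayClass
metadata:
  name: istio
spec:
  # Owned by the infrastructure provider / controller installation;
  # Gateways select it by name through gatewayClassName.
  controllerName: istio.io/gateway-controller

Platform operators then create Gateways that reference this class, and application developers attach their routes to those Gateways, as the example in the next section shows.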
Kubernetes Gateway API Example
The following is an example of using the Gateway API in Istio:
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: gateway
  namespace: istio-ingress
spec:
  gatewayClassName: istio
  listeners:
  - name: default
    hostname: "*.example.com"
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: http
  namespace: default
spec:
  parentRefs:
  - name: gateway
    namespace: istio-ingress
  hostnames: ["httpbin.example.com"]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: httpbin
      port: 8000
Similar to Ingress, a Gateway uses gatewayClassName to declare the controller it uses. This Gateway must be created by the platform administrator and allows client requests for the *.example.com domain. Application developers can create routing rules in the namespace where their service resides (default in this case) and bind them to the Gateway via parentRefs, but only if the Gateway explicitly allows them to do so via the rules set in the allowedRoutes field.
When you apply the above configuration, Istio will automatically create a load-balancing gateway for you. The following diagram shows the workflow of the Gateway API:
Figure 3: Gateway API workflow.
The detailed process is as follows:
- The infrastructure provider provides GatewayClass and Gateway Controller.
- Platform operators deploy Gateways (multiple deployments are possible, potentially using different GatewayClasses).
- The Gateway Controller continuously monitors changes to GatewayClass and Gateway objects in the Kubernetes API Server.
- The Gateway Controller creates the corresponding gateway based on the platform operator’s configuration.
- Application developers apply xRoute resources (HTTPRoute, TCPRoute, etc.) and bind them to their services.
- In a cloud environment, the client reaches the ingress gateway through its load balancer.
- The gateway routes traffic to the corresponding back-end service based on the matching criteria in the request.
From the above steps, we can see that the Gateway API has a clear division of roles compared to Ingress and that routing rules can be decoupled from the gateway configuration, significantly increasing management flexibility.
The following diagram shows how a request is routed once it reaches the gateway and is processed:
Figure 4: Gateway route flow.
From this figure, we can see that routes are bound to the gateway and are generally deployed in the same namespace as their back-end services. If a route needs to reference a service in a different namespace, you must explicitly grant it cross-namespace reference rights with a ReferenceGrant. In the following example, the HTTPRoute foo in the foo namespace refers to the bar service in the bar namespace.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: HTTPRoute
metadata:
  name: foo
  namespace: foo
spec:
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /bar
    backendRefs:
    - name: bar
      namespace: bar   # cross-namespace reference; requires the ReferenceGrant below
      port: 8080       # illustrative port for the bar service
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: ReferenceGrant
metadata:
  name: bar
  namespace: bar
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: foo
  to:
  - group: ""
    kind: Service
Envoy Gateway
Envoy Gateway is the most advanced implementation of the Gateway API, using Envoy Proxy as an API gateway to deliver a simplified deployment model and an API layer aimed at lighter use cases. It was created to bring Envoy Proxy into compliance with the Gateway API; Tetrate is a core sponsor of the project. Its expressive, scalable, role-oriented API design for ingress and L4/L7 traffic routing makes it a foundation for vendors to build value-added API gateway products.
Long before Envoy Gateway was released, Envoy had been widely adopted as one of the most popular cloud-native proxies: several gateway products are built on Envoy, and the Istio service mesh uses it as the default sidecar proxy, configuring these distributed proxies via the xDS protocol. Envoy Gateway also uses xDS to configure the Envoy fleet. The following diagram illustrates the architecture of Envoy Gateway.
Figure 5: Envoy Gateway architecture.
The infrastructure provider will provide you with a GatewayClass. You can create an Envoy Gateway by creating a Gateway declaration. The routes and policy attachments you bind to that Gateway are sent to the Envoy fleet via the xDS protocol.
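As a minimal sketch of that flow, the declarations might look like the following. It assumes the Gateway API v1beta1 CRDs are installed, uses eg as an illustrative class and Gateway name, and uses the controller name Envoy Gateway registers, gateway.envoyproxy.io/gatewayclass-controller; your versions and names may differ.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: eg
  namespace: default
spec:
  gatewayClassName: eg   # binds this Gateway to the Envoy Gateway controller
  listeners:
  - name: http
    protocol: HTTP
    port: 80

HTTPRoutes are then attached to this Gateway exactly as in the earlier examples, and Envoy Gateway translates them into xDS configuration for the Envoy fleet.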
Get Started with Gateway API Using Envoy Gateway
Getting started with Gateway API and Envoy Gateway is easy. Go to the documentation on the Envoy Gateway project site and follow the instructions in the “quick start” tutorial. Once you have it up and running, you can take it through its paces with the comprehensive set of tutorials on Envoy Gateway’s features in the user documentation.