Exposing services hosted within a Kubernetes cluster to external users and other IT systems requires deliberate configuration, because a cluster's network is private by default. Kubernetes Ingress is the mechanism for presenting services and applications running inside a cluster to the outside world.
Ingress delivery in Kubernetes has changed significantly over the years. In this post, we'll outline what Ingress does and trace its evolution to the present day, where Envoy Gateway is the leading choice for delivering Ingress services.
What Is Ingress in Kubernetes?
And how has it been done in the past?
The first iterations of Ingress in Kubernetes exposed selected services to external clients through a load-balancing proxy known as an Ingress Controller. Typically, the load-balancing engine was NGINX or Envoy, managed by a Kubernetes controller that watched Kubernetes Ingress resources. This early configuration let the proxy present services on an externally accessible IP address and port, accepting external traffic and routing it to the correct containers inside the cluster.
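As a sketch, a classic Ingress resource of this era looked something like the following (the hostname, service name, and ingress class here are illustrative, not from any particular deployment):

```yaml
# Illustrative Ingress resource: routes traffic for example.com
# to a Service named web-service on port 80.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx   # selects which Ingress Controller handles this resource
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
```

The Ingress Controller watches for resources like this and programs its proxy accordingly, so external requests for example.com reach the Pods behind web-service.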
As Kubernetes deployments grew larger and more complex, the demands placed on Ingress Controllers expanded to include enhanced security, API gateway features, finer-grained traffic management, and observability into what was happening within clusters and in communications with external users. These growing demands exposed limitations in the original Ingress design and its implementations.
What Are the Limitations of Legacy Kubernetes Ingress?
The initial solutions for delivering Kubernetes Ingress did not follow any standard approach. While they all consumed the same Ingress resources, each controller implemented its extended features, typically through vendor-specific annotations, in different and incompatible ways. Anyone looking to build container-based application deployment models on Kubernetes had to evaluate the various Ingress solutions available and pick one.
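For example, behavior the Ingress spec itself could not express, such as a path rewrite or middleware chain, had to be configured through controller-specific annotation namespaces. The fragments below (metadata only, names illustrative) show how two popular controllers diverged:

```yaml
# NGINX Ingress Controller: rewrite via its own annotation namespace.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
---
# Traefik: the same kind of behavior attached through a different,
# incompatible annotation referencing a Traefik-specific middleware.
metadata:
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: default-strip-prefix@kubernetescrd
```

An Ingress written for one controller could not simply be moved to another; the annotations would be silently ignored.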
The absence of a clear choice was a problem. As organizations weighed functionality, community support, maintenance, portability, and integration into complex configuration pipelines, different teams inevitably picked different solutions. The result was a landscape of incompatible Kubernetes deployments, which is not ideal for a platform meant to simplify the deployment and management of containerized applications.
Over time, it became clear that a streamlined and standardized approach to Kubernetes Ingress was needed. The Kubernetes Gateway API and the development of Envoy Gateway as a standard implementation of the Gateway API based on the widely deployed Envoy Proxy were a response to this need.
How Does the Gateway API Plus Envoy Gateway Improve on Legacy Kubernetes Ingress?
The introduction of the Gateway API as a replacement for the aging and struggling Ingress resources was a watershed moment for Kubernetes Ingress administration. Gateway API retained the core functionality needed to expose services to external entities, and it added modern capabilities for multi-tenant deployments through a role-oriented resource model that separates the concerns of infrastructure providers, cluster operators, and application developers. The expanded API and functionality addressed many of the limitations of its predecessor.
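To illustrate that role separation, a minimal Gateway API configuration might pair a Gateway owned by a cluster operator with an HTTPRoute owned by an application team. The names, hostname, and backend below are illustrative:

```yaml
# Gateway: typically owned by a cluster operator, defines the listener.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared-gateway
spec:
  gatewayClassName: example-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# HTTPRoute: owned by an application team, attaches to the shared Gateway.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route
spec:
  parentRefs:
  - name: shared-gateway
  hostnames:
  - "example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: web-service
      port: 80
```

Because routing rules live in first-class typed fields rather than vendor annotations, the same resources work across conformant Gateway API implementations.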
As the Gateway API was replacing Ingress Resources, the Envoy Gateway Project emerged to combine the expressiveness of Gateway API with the performance and reliability of Envoy.
The Envoy Gateway project resulted in a powerful tool that gives organizations needing Kubernetes Ingress a range of benefits, including stronger security, traffic shaping, and API gateway-style control.
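As a sketch of how this fits together, Envoy Gateway registers itself through a GatewayClass; Gateways that reference this class are then realized as managed Envoy proxies. The controllerName below follows the Envoy Gateway documentation, and the class name is illustrative:

```yaml
# GatewayClass binding Gateways to the Envoy Gateway controller.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
```

Any Gateway whose gatewayClassName is envoy-gateway is then handled by Envoy Gateway, which provisions and configures Envoy proxy instances on the team's behalf.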
Envoy Gateway is now the standard Ingress choice for Kubernetes deployments. Built on the open-source Envoy proxy, it offers a wide range of third-party integrations and extensive capabilities that behave consistently across vendors, without the risk of vendor lock-in.