
Today we announced the release of Tetrate Enterprise Envoy Gateway (TEG), the enterprise-ready distribution of Envoy Gateway. In this article, we’ll show you how to get TEG up and running yourself in 15 minutes. Running TEG does not require any Envoy proxy expertise: thanks to the Kubernetes Gateway API and the Envoy Gateway control plane, you will feel right at home deploying gateways and registering routes to your services. The API lends itself well to idempotent, GitOps-driven configuration workflows and ingress gateway deployments. Beyond the initial installation and onboarding, adding further security and resiliency measures takes only a few more steps.
Installation
You can install TEG through a simple Helm chart install. Check out the TEG installation guide for more details.
To make it easy to kick the tires, TEG is also distributed as a demonstration Helm chart. Alongside TEG, the demonstration chart installs a full Prometheus + Grafana observability stack with ephemeral storage. Installing this version is as simple as:
helm install teg oci://docker.io/tetrate/teg-demo-helm \
--version v0.1.1 \
-n envoy-gateway-system --create-namespace
After a little while the following components will be deployed:
kubectl get pod -n envoy-gateway-system
NAME READY STATUS RESTARTS AGE
envoy-default-eg-e41e7b31-7c69cc7f85-ggqgd 1/1 Running 0 80s
envoy-gateway-74cb6b457d-j6n6k 2/2 Running 0 72s
envoy-ratelimit-6dd7cbdb8f-k77qs 1/1 Running 0 70s
grafana-6ffdc69f4b-5jx9x 2/2 Running 0 87s
loki-0 1/1 Running 0 85s
loki-canary-d5d4x 1/1 Running 0 87s
loki-canary-sr7q5 1/1 Running 0 87s
loki-canary-vhxcp 1/1 Running 0 87s
loki-gateway-5cf44975fb-24xf5 1/1 Running 0 87s
loki-logs-4xftd 2/2 Running 0 77s
loki-logs-5l2x5 2/2 Running 0 77s
loki-logs-sc5sr 2/2 Running 0 77s
otel-collector-586b94b974-nkq25 1/1 Running 0 87s
prometheus-76698d9d56-tg7lt 2/2 Running 0 86s
teg-envoy-gateway-59458d7b4d-j2n55 1/1 Running 0 86s
teg-grafana-agent-operator-665fb9b9b-cc7x8 1/1 Running 0 87s
teg-redis-55bf8c7648-fcd6r 1/1 Running 0 86s
tempo-0 1/1 Running 0 85s
Deploy an App
Use your favorite workflow to deploy your app on Kubernetes. TEG does not impose limits on how you manage your applications’ lifecycles. In fact, it can augment your application configuration experience by allowing application teams to configure and deploy their own per-app ingress gateways as part of the overall app deployment strategy. This removes the need for application teams to reach out to the platform team for ingress route configuration changes.
In the next release of TEG we will enable the tiered ingress (ingress of ingresses) use case: a cluster of Tier 1 (edge) TEG gateways, fully managed by the platform team, acts as the point of ingress for all applications and handles certificates and TLS termination, while application teams run their own per-app (or shared) Tier 2 ingresses that manage API-to-service routing. The Tier 1 ingress will automatically discover and route incoming traffic to the correct application Tier 2 gateways. If TEG is deployed in multiple clusters, this also unlocks powerful automated failover scenarios that respond much faster than DNS changes can.
As an example, we’ll deploy the httpbin application commonly used in tutorials:
kubectl create namespace httpbin
kubectl apply -n httpbin -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
Deploy a TEG App Ingress
With the httpbin service deployed in the httpbin namespace, it is time to deploy and configure an Envoy proxy to act as the application’s ingress. Deployment of the gateway and configuration of its listener are done through the Gateway resource as follows:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: dedicated-gateway
  namespace: httpbin
spec:
  gatewayClassName: teg
  listeners:
    - name: http
      protocol: HTTP
      port: 80
With the above Gateway configuration applied, you’ll see the dedicated Envoy Gateway show up in the envoy-gateway-system namespace.
kubectl get pods -n envoy-gateway-system \
-l gateway.envoyproxy.io/owning-gateway-namespace=httpbin
NAME READY STATUS RESTARTS AGE
envoy-httpbin-dedicated-gateway-c4239473-55fd46c5c-pr6bp 1/1 Running 0 23s
The gateway is listening for traffic coming from outside the cluster on port 80.
kubectl get gateway -n httpbin
NAME CLASS ADDRESS PROGRAMMED AGE
dedicated-gateway teg 35.238.21.86 True 2m28s
kubectl get svc -n envoy-gateway-system \
-l gateway.envoyproxy.io/owning-gateway-namespace=httpbin
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
envoy-httpbin-dedicated-gateway-c4239473 LoadBalancer 10.0.7.31 35.238.21.86 80:31583/TCP 5m17s
The proxy is running and listening on port 80, but we still need to tell TEG which services it needs to send requests to. We do this by providing HTTPRoute objects.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: httpbin
  namespace: httpbin
spec:
  parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: dedicated-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /httpbin/
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
      backendRefs:
        - group: ""
          kind: Service
          name: httpbin
          port: 8000
Once this is applied, Envoy Gateway will start to route requests to the httpbin service we deployed earlier.
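Before testing, you can optionally confirm that the route has been accepted by the dedicated gateway (the exact status output varies by Envoy Gateway version):
kubectl get httproute httpbin -n httpbin
kubectl describe httproute httpbin -n httpbin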
You can test by making a request to httpbin:
export DEDICATED_GATEWAY_IP=$(kubectl get service -n envoy-gateway-system -l gateway.envoyproxy.io/owning-gateway-namespace=httpbin -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
If your provider assigns a hostname rather than an IP address to the load balancer (e.g., AWS), use the following instead:
export DEDICATED_GATEWAY_IP=$(kubectl get service -n envoy-gateway-system -l gateway.envoyproxy.io/owning-gateway-namespace=httpbin -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
Now we can make a request:
curl -i http://${DEDICATED_GATEWAY_IP}/httpbin/get
You should see output similar to:
HTTP/1.1 200 OK
server: envoy
date: Mon, 04 Sep 2023 15:15:32 GMT
content-type: application/json
content-length: 339
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 3
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "104.196.111.251",
    "User-Agent": "curl/8.1.2",
    "X-Envoy-Expected-Rq-Timeout-Ms": "15000",
    "X-Envoy-External-Address": "99.254.151.222",
    "X-Envoy-Original-Path": "/httpbin/get"
  },
  "origin": "99.254.151.222",
  "url": "http://104.196.111.251/get"
}
Set up Rate Limiting
Setting up rate limits involves two steps. First, we need to configure a rate limiting policy.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: RateLimitFilter
metadata:
  namespace: httpbin
  name: ratelimit-1hz
spec:
  type: Global
  global:
    rules:
      - limit:
          requests: 1
          unit: Second
Once the policy is deployed, we can update our HTTPRoute to use it. This is done by adding another filter to the rule object that handles traffic.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: httpbin
  namespace: httpbin
spec:
  parentRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: dedicated-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /httpbin/
      filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
        - type: ExtensionRef
          extensionRef:
            group: gateway.envoyproxy.io
            kind: RateLimitFilter
            name: ratelimit-1hz
      backendRefs:
        - group: ""
          kind: Service
          name: httpbin
          port: 8000
A single call to the service will succeed, but if you quickly trigger more requests you will start to see 429 Too Many Requests errors. Try it out with the following command:
curl -i http://$DEDICATED_GATEWAY_IP/httpbin/get ; echo "===" ; \
curl -i http://$DEDICATED_GATEWAY_IP/httpbin/get ;
HTTP/1.1 200 OK
server: envoy
date: Thu, 14 Sep 2023 18:39:09 GMT
content-type: application/json
content-length: 333
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 2
x-ratelimit-limit: 1, 1;w=1
x-ratelimit-remaining: 0
x-ratelimit-reset: 1
{
  "args": {},
  "headers": {
    "Accept": "*/*",
    "Host": "35.238.21.86",
    "User-Agent": "curl/8.1.2",
    "X-Envoy-Expected-Rq-Timeout-Ms": "15000",
    "X-Envoy-External-Address": "82.172.136.247",
    "X-Envoy-Original-Path": "/httpbin/get"
  },
  "origin": "82.172.136.247",
  "url": "http://35.238.21.86/get"
}
===
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
x-ratelimit-limit: 1, 1;w=1
x-ratelimit-remaining: 0
x-ratelimit-reset: 1
date: Thu, 14 Sep 2023 18:39:08 GMT
server: envoy
content-length: 0
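To watch the limit trip over several calls without scanning full response bodies, a short loop that prints only the HTTP status codes works as well. This is just an illustrative snippet reusing the $DEDICATED_GATEWAY_IP variable from earlier; within a single second you should see one 200 followed by 429s:
# Issue five requests in a row and print only the status codes
for i in $(seq 1 5); do
  curl -s -o /dev/null -w "%{http_code}\n" "http://${DEDICATED_GATEWAY_IP}/httpbin/get"
done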
Get Insights through Your Preferred Observability Stack
One of the primary goals of Envoy is to make the network understandable. As such, both Envoy Proxy and the Envoy Gateway control plane provide extensive support for observability ecosystems. A large number of statistics are available through various interfaces (Prometheus, StatsD, and OpenTelemetry OTLP), along with out-of-the-box support for distributed tracing through OpenTelemetry and Zipkin.
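As a rough sketch of how that wiring can look in practice, Envoy Gateway exposes proxy telemetry settings through its EnvoyProxy resource, which is attached to a GatewayClass or Gateway via parametersRef. The field names and the collector address below are illustrative assumptions and may differ between Envoy Gateway versions, so check the TEG documentation before applying anything like this:
# Illustrative example only; field names and the collector address are assumptions
# and may vary across Envoy Gateway versions.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: observability-proxy-config
  namespace: envoy-gateway-system
spec:
  telemetry:
    metrics:
      sinks:
        - type: OpenTelemetry
          openTelemetry:
            host: otel-collector.envoy-gateway-system.svc.cluster.local
            port: 4317
    tracing:
      samplingRate: 100
      provider:
        type: OpenTelemetry
        host: otel-collector.envoy-gateway-system.svc.cluster.local
        port: 4317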
With the demo Helm install, TEG bundles a fully loaded Prometheus + Grafana stack so you can immediately test drive these observability capabilities. To connect to the Grafana UI, port-forward to the internal service like this:
kubectl port-forward -n envoy-gateway-system deployment/grafana 3000
Now, open your web browser to http://localhost:3000. If presented with a login screen, use the default Grafana credentials (username: admin, password: admin). From here you can select one of the preconfigured dashboards:



Try TEG Now
That’s a quick overview of the major TEG capabilities. If you want a comprehensive conceptual introduction, you can read more about TEG on its documentation site.