
Tasks

Learn Envoy Gateway hands-on through tasks

1 - Quickstart

Get started with Envoy Gateway in a few simple steps.

This “quick start” will help you get started with Envoy Gateway in a few simple steps.

Prerequisites

A Kubernetes cluster.

Note: Refer to the Compatibility Matrix for supported Kubernetes versions.

Note: In case your Kubernetes cluster does not have a LoadBalancer implementation, we recommend installing one so the Gateway resource has an Address associated with it. We recommend using MetalLB.

Note: For Mac users, you need to install and run Docker Mac Net Connect to make the Docker network reachable.

Installation

Install the Gateway API CRDs and Envoy Gateway:

helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.1.2 -n envoy-gateway-system --create-namespace

Wait for Envoy Gateway to become available:

kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available

Install the GatewayClass, Gateway, HTTPRoute and example app:

kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/v1.1.2/quickstart.yaml -n default

Note: quickstart.yaml defines that Envoy Gateway will listen for traffic on port 80 on its globally-routable IP address, to make it easy to use browsers to test Envoy Gateway. When Envoy Gateway sees that its Listener is using a privileged port (<1024), it will map this internally to an unprivileged port, so that Envoy Gateway doesn’t need additional privileges. It’s important to be aware of this mapping, since you may need to take it into consideration when debugging.
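
If you want to see this mapping for yourself, you can inspect the ports of the Envoy service that Envoy Gateway creates for the Gateway (the owner labels below are the same ones used later in this guide). The Service port is expected to be 80, while the targetPort is expected to be an unprivileged container port (for example 10080), though the exact value is an implementation detail that may change between releases:

kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].spec.ports}'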

Testing the Configuration

You can test the configuration by sending traffic to the Gateway’s external address. To get the external address of the Gateway, run:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

In certain environments, the load balancer may be exposed using a hostname instead of an IP address. If so, the Gateway address value returned by the command above will contain that hostname rather than an IP.
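
To check which form your environment reports, you can inspect the type of the first Gateway address, which will be either IPAddress or Hostname:

kubectl get gateway/eg -o jsonpath='{.status.addresses[0].type}'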

Curl the example app through Envoy proxy:

curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/get

Get the name of the Envoy service created by the example Gateway:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

Port forward to the Envoy service:

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8888:80 &

Curl the example app through Envoy proxy:

curl --verbose --header "Host: www.example.com" http://localhost:8888/get

What to explore next?

In this quickstart, you have:

  • Installed Envoy Gateway
  • Deployed a backend service, and a gateway
  • Configured the gateway using the Kubernetes Gateway API resources Gateway and HTTPRoute to direct incoming HTTP requests to the backend service.

Here is a suggested list of follow-on tasks to guide you in your exploration of Envoy Gateway:

Review the Tasks section for the scenario matching your use case. The Envoy Gateway tasks are organized by category: traffic management, security, extensibility, observability, and operations.

Clean-Up

Use the steps in this section to uninstall everything from the quickstart.

Delete the GatewayClass, Gateway, HTTPRoute and Example App:

kubectl delete -f https://github.com/envoyproxy/gateway/releases/download/v1.1.2/quickstart.yaml --ignore-not-found=true

Delete the Gateway API CRDs and Envoy Gateway:

helm uninstall eg -n envoy-gateway-system

Next Steps

Check out the Developer Guide to get involved in the project.

2 - Traffic

This section includes Traffic Management tasks.

2.1 - Backend Routing

Envoy Gateway supports routing to native K8s resources such as Service and ServiceImport. The Backend API is a custom Envoy Gateway extension resource that can be used in the Gateway API BackendObjectReference.

Motivation

The Backend API was added to support several use cases:

  • Allowing users to integrate Envoy with services (Ext Auth, Rate Limit, ALS, …) using Unix Domain Sockets, which are currently not supported by K8s.
  • Simplifying routing to cluster-external backends, which currently requires users to maintain both K8s Service and EndpointSlice resources.

Warning

Similar to the K8s EndpointSlice API, the Backend API can be misused to allow traffic to be sent to otherwise restricted destinations, as described in CVE-2021-25740. A Backend resource can be used to:

  • Expose a Service or Pod that should not be accessible
  • Reference a Service or Pod by a Route without appropriate Reference Grants
  • Expose the Envoy Proxy localhost (including the Envoy admin endpoint)

For these reasons, the Backend API is disabled by default in Envoy Gateway configuration. Envoy Gateway admins are advised to follow upstream recommendations and restrict access to the Backend API using K8s RBAC.
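
As a sketch of such a restriction (the Role name is hypothetical, and you would still need a RoleBinding granting it only to trusted subjects), a namespaced Role limiting who may manage Backend resources could look like this:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backend-admin   # hypothetical Role name
  namespace: default
rules:
  - apiGroups: ["gateway.envoyproxy.io"]
    resources: ["backends"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
EOF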

Restrictions

The Backend API is currently supported only in the following BackendReferences:

The Backend API supports attachment of the following policies:

Certain restrictions apply on the value of hostnames and addresses. For example, the loopback IP address range and the localhost hostname are forbidden.

Envoy Gateway does not manage the lifecycle of Unix domain sockets referenced by the Backend resource. Envoy Gateway admins are responsible for creating and mounting the sockets into the Envoy proxy pod. The latter can be achieved by patching the Envoy deployment using the EnvoyProxy resource.
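
For reference, a Backend that points at a Unix domain socket might look like the following sketch. The socket path is hypothetical and must already exist inside the Envoy proxy pod:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: ext-auth-uds   # hypothetical name
  namespace: default
spec:
  endpoints:
    - unix:
        path: /var/run/ext-auth/socket.sock   # hypothetical socket path, mounted by the admin
EOF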

Quickstart

Prerequisites

  • Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Enable Backend

  • By default, the Backend API is disabled. Let’s enable it in the EnvoyGateway startup configuration.

  • The default installation of Envoy Gateway installs a default EnvoyGateway configuration and attaches it using a ConfigMap. In the next step, we will update this resource to enable Backend.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    extensionApis:
      enableBackend: true
EOF

  • After updating the ConfigMap, you will need to restart the envoy-gateway deployment so that the new configuration takes effect:
kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
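
You can wait for the restarted deployment to become available again using the same readiness check as during installation:

kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available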

Testing

Route to External Backend

  • In the following example, we will create a Backend resource that routes to httpbin.org:80 and an HTTPRoute that references this backend.
cat <<EOF | kubectl apply -f -
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        - group: gateway.envoyproxy.io
          kind: Backend
          name: httpbin
      matches:
        - path:
            type: PathPrefix
            value: /
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: httpbin
  namespace: default
spec:
  endpoints:
    - fqdn:
        hostname: httpbin.org
        port: 80
EOF

Get the Gateway address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Send a request and view the response:

curl -I -HHost:www.example.com http://${GATEWAY_HOST}/headers

2.2 - Circuit Breakers

Envoy circuit breakers can be used to fail quickly and apply back-pressure in response to upstream service degradation.

Envoy Gateway supports the following circuit breaker thresholds:

  • Concurrent Connections: limit the connections that Envoy can establish to the upstream service. When this threshold is met, new connections will not be established, and some requests will be queued until an existing connection becomes available.
  • Concurrent Requests: limit on concurrent requests in-flight from Envoy to the upstream service. When this threshold is met, requests will be queued.
  • Pending Requests: limit the pending request queue size. When this threshold is met, overflowing requests will be terminated with a 503 status code.

Envoy’s circuit breakers are distributed: counters are not synchronized across different Envoy processes. The default Envoy and Envoy Gateway circuit breaker threshold values (1024) may be too strict for high-throughput systems.

Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired circuit breaker thresholds. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Note: There are distinct circuit breaker counters for each BackendReference in an xRoute rule. Even if a BackendTrafficPolicy targets a Gateway, each BackendReference in that Gateway still has a separate circuit breaker counter.

Prerequisites

Install Envoy Gateway

  • Follow the installation step from the Quickstart to install Envoy Gateway and sample resources.

Install the hey load testing tool

  • The hey CLI will be used to generate load and measure response times. Follow the installation instruction from the Hey project docs.

Test and customize circuit breaker settings

This example simulates a degraded backend that takes 10 seconds to respond by adding the ?delay=10s query parameter to API calls. The hey tool will be used to generate 100 concurrent requests.

hey -n 100 -c 100 -host "www.example.com"  http://${GATEWAY_HOST}/?delay=10s
Summary:
  Total:	10.3426 secs
  Slowest:	10.3420 secs
  Fastest:	10.0664 secs
  Average:	10.2145 secs
  Requests/sec:	9.6687

  Total data:	36600 bytes
  Size/request:	366 bytes

Response time histogram:
  10.066 [1]	|■■■■
  10.094 [4]	|■■■■■■■■■■■■■■■
  10.122 [9]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.149 [10]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.177 [10]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.204 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.232 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.259 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.287 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.314 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.342 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■

The default circuit breaker threshold (1024) is not met. As a result, requests do not overflow: all requests are proxied upstream and both Envoy and clients wait for 10s.

In order to fail fast, apply a BackendTrafficPolicy that limits concurrent requests to 10 and pending requests to 0.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: circuitbreaker-for-route
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: backend
  circuitBreaker:
    maxPendingRequests: 0
    maxParallelRequests: 10
EOF

Execute the load simulation again.

hey -n 100 -c 100 -host "www.example.com"  http://${GATEWAY_HOST}/?delay=10s
Summary:
  Total:	10.1230 secs
  Slowest:	10.1224 secs
  Fastest:	0.0529 secs
  Average:	1.0677 secs
  Requests/sec:	9.8785

  Total data:	10940 bytes
  Size/request:	109 bytes

Response time histogram:
  0.053 [1]	|
  1.060 [89]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  2.067 [0]	|
  3.074 [0]	|
  4.081 [0]	|
  5.088 [0]	|
  6.095 [0]	|
  7.102 [0]	|
  8.109 [0]	|
  9.115 [0]	|
  10.122 [10]	|■■■■

With the new circuit breaker settings, and due to the slowness of the backend, only the first 10 concurrent requests were proxied, while the other 90 overflowed.

  • Overflowing Requests failed fast, reducing proxy resource consumption.
  • Upstream traffic was limited, alleviating the pressure on the degraded service.

2.3 - Client Traffic Policy

This task explains the usage of the ClientTrafficPolicy API.

Introduction

The ClientTrafficPolicy API allows system administrators to configure how the Envoy Proxy server behaves with downstream clients.

Motivation

This API was added as a new policy attachment resource that can be applied to Gateway resources. It is meant to hold settings for configuring the behavior of the connection between the downstream client and the Envoy Proxy listener. It is the counterpart to the BackendTrafficPolicy API resource.

Quickstart

Prerequisites

  • Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Support TCP keepalive for downstream client

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-tcp-keepalive-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  tcpKeepalive:
    idleTime: 20m
    interval: 60s
    probes: 3
EOF

Verify that ClientTrafficPolicy is Accepted:

kubectl get clienttrafficpolicies.gateway.envoyproxy.io -n default

You should see the policy marked as accepted like this:

NAME                          STATUS     AGE
enable-tcp-keepalive-policy   Accepted   5s

Curl the example app through Envoy proxy once again:

curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/get --next --header "Host: www.example.com" http://$GATEWAY_HOST/headers

You should see the output like this:

*   Trying 172.18.255.202:80...
* Connected to 172.18.255.202 (172.18.255.202) port 80 (#0)
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Fri, 01 Dec 2023 10:17:04 GMT
< content-length: 507
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/get",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.1.2"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "172.18.0.2"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "4d0d33e8-d611-41f0-9da0-6458eec20fa5"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-58d58f745-2zwvn"
* Connection #0 to host 172.18.255.202 left intact
}* Found bundle for host: 0x7fb9f5204ea0 [serially]
* Can not multiplex, even if we wanted to
* Re-using existing connection #0 with host 172.18.255.202
> GET /headers HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Fri, 01 Dec 2023 10:17:04 GMT
< content-length: 511
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/headers",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.1.2"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "172.18.0.2"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "9a8874c0-c117-481c-9b04-933571732ca5"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-58d58f745-2zwvn"
* Connection #0 to host 172.18.255.202 left intact
}

You can see that the connection was kept alive and reused, as indicated by the following lines in the output:

* Connection #0 to host 172.18.255.202 left intact
* Re-using existing connection #0 with host 172.18.255.202

Enable Proxy Protocol for downstream client

This example configures Proxy Protocol for downstream clients.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-proxy-protocol-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  enableProxyProtocol: true
EOF

Verify that ClientTrafficPolicy is Accepted:

kubectl get clienttrafficpolicies.gateway.envoyproxy.io -n default

You should see the policy marked as accepted like this:

NAME                          STATUS     AGE
enable-proxy-protocol-policy   Accepted   5s

Try the endpoint without using PROXY protocol with curl:

curl -v --header "Host: www.example.com" http://$GATEWAY_HOST/get
*   Trying 172.18.255.202:80...
* Connected to 172.18.255.202 (172.18.255.202) port 80 (#0)
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

Curl the example app through Envoy proxy once again, now sending HAProxy PROXY protocol header at the beginning of the connection with --haproxy-protocol flag:

curl --verbose --haproxy-protocol --header "Host: www.example.com" http://$GATEWAY_HOST/get

You should now expect a 200 response status and also see that the source IP was preserved in the X-Forwarded-For header.

*   Trying 172.18.255.202:80...
* Connected to 172.18.255.202 (172.18.255.202) port 80 (#0)
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Mon, 04 Dec 2023 21:11:43 GMT
< content-length: 510
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/get",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.1.2"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "192.168.255.6"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "290e4b61-44b7-4e5c-a39c-0ec76784e897"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-58d58f745-2zwvn"
* Connection #0 to host 172.18.255.202 left intact
}

Configure Client IP Detection

This example configures the number of additional ingress proxy hops from the right side of XFF HTTP headers to trust when determining the origin client’s IP address and determines whether or not x-forwarded-proto headers will be trusted. Refer to https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-for for details.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: http-client-ip-detection
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  clientIPDetection:
    xForwardedFor:
      numTrustedHops: 2
EOF

Verify that ClientTrafficPolicy is Accepted:

kubectl get clienttrafficpolicies.gateway.envoyproxy.io -n default

You should see the policy marked as accepted like this:

NAME                          STATUS     AGE
http-client-ip-detection   Accepted   5s
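
The commands below reference an ENVOY_DEPLOYMENT variable. If it is not already set, you can look up the Envoy proxy deployment using the same owner labels used earlier for the Envoy service (assuming the quickstart Gateway eg in the default namespace):

export ENVOY_DEPLOYMENT=$(kubectl get deploy -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')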

Open port-forward to the admin interface port:

kubectl port-forward deploy/${ENVOY_DEPLOYMENT} -n envoy-gateway-system 19000:19000

Curl the admin interface port to fetch the configured value for xff_num_trusted_hops:

curl -s 'http://localhost:19000/config_dump?resource=dynamic_listeners' \
  | jq -r '.configs[0].active_state.listener.default_filter_chain.filters[0].typed_config 
      | with_entries(select(.key | match("xff|remote_address|original_ip")))'

You should expect to see the following:

{
  "use_remote_address": true,
  "xff_num_trusted_hops": 2
}

Curl the example app through Envoy proxy:

curl -v http://$GATEWAY_HOST/get \
  -H "Host: www.example.com" \
  -H "X-Forwarded-Proto: https" \
  -H "X-Forwarded-For: 1.1.1.1,2.2.2.2"

You should expect a 200 response status, see that X-Forwarded-Proto was preserved, and see that X-Envoy-External-Address was set to the leftmost address in the X-Forwarded-For header:

*   Trying [::1]:8888...
* Connected to localhost (::1) port 8888
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
> X-Forwarded-Proto: https
> X-Forwarded-For: 1.1.1.1,2.2.2.2
> 
Handling connection for 8888
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Tue, 30 Jan 2024 15:19:22 GMT
< content-length: 535
< x-envoy-upstream-service-time: 0
< server: envoy
< 
{
 "path": "/get",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.4.0"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-External-Address": [
   "1.1.1.1"
  ],
  "X-Forwarded-For": [
   "1.1.1.1,2.2.2.2,10.244.0.9"
  ],
  "X-Forwarded-Proto": [
   "https"
  ],
  "X-Request-Id": [
   "53ccfad7-1899-40fa-9322-ddb833aa1ac3"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-58d58f745-8psnc"
* Connection #0 to host localhost left intact
}

Enable HTTP Request Received Timeout

This feature allows you to limit the time taken by the Envoy Proxy fleet to receive the entire request from the client, which is useful in preventing certain clients from consuming too much memory in Envoy. This example configures the HTTP request timeout for the client; please check out the details here.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: client-timeout
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  timeout:
    http:
      requestReceivedTimeout: 2s
EOF

Curl the example app through Envoy proxy:

curl -v http://$GATEWAY_HOST/get \
  -H "Host: www.example.com" \
  -H "Content-Length: 10000"

You should expect a 408 response status after 2s:

curl -v http://$GATEWAY_HOST/get \
  -H "Host: www.example.com" \
  -H "Content-Length: 10000"
*   Trying 172.18.255.200:80...
* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
> Content-Length: 10000
>
< HTTP/1.1 408 Request Timeout
< content-length: 15
< content-type: text/plain
< date: Tue, 27 Feb 2024 07:38:27 GMT
< connection: close
<
* Closing connection
request timeout

Configure Client HTTP Idle Timeout

The idle timeout is defined as the period in which there are no active requests. When the idle timeout is reached, the connection will be closed. For more details see here.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: client-timeout
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  timeout:
    http:
      idleTimeout: 5s
EOF

Open a connection to Envoy proxy and leave it idle:

openssl s_client -crlf -connect $GATEWAY_HOST:443

You should expect the connection to be closed after 5s.

You can also check the number of connections closed due to idle timeout by using the following query:

envoy_http_downstream_cx_idle_timeout{envoy_http_conn_manager_prefix="<name of connection manager>"} 

The number of connections closed due to idle timeout should be increased by 1.
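
If you do not have Prometheus set up, one way to check this counter is through the Envoy admin interface, assuming a port-forward to the admin port 19000 as shown in the Client IP Detection section above:

curl -s http://localhost:19000/stats | grep downstream_cx_idle_timeout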

Configure Downstream Per Connection Buffer Limit

This feature allows you to set a soft limit on the size of the listener’s new connection read and write buffers. The size is configured using the resource.Quantity format; see examples here.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: client-timeout
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  connection:
    bufferLimit: 1024
EOF
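
One way to confirm the limit was applied is to inspect the generated listener configuration through the Envoy admin interface (again assuming a port-forward to the admin port 19000); the configured value should appear as per_connection_buffer_limit_bytes:

curl -s 'http://localhost:19000/config_dump?resource=dynamic_listeners' | jq -r '.configs[0].active_state.listener.per_connection_buffer_limit_bytes'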

2.4 - Connection Limit

The connection limit feature allows users to limit the number of concurrently active TCP connections on a Gateway or a Listener. When the connection limit is reached, new connections are closed immediately by Envoy proxy. It’s possible to configure a delay for connection rejection.

Users may want to limit the number of connections for several reasons:

  • Protect resources like CPU and Memory.
  • Ensure that different listeners can receive a fair share of global resources.
  • Protect from malicious activity like DoS attacks.

Envoy Gateway introduces a new CRD called ClientTrafficPolicy that allows the user to describe their desired connection limit settings. This instantiated resource can be linked to a Gateway.

The Envoy connection limit implementation is distributed: counters are not synchronized between different envoy proxies.

When a ClientTrafficPolicy is attached to a Gateway, the connection limit applies differently based on the Listener protocol in use:

  • HTTP: all HTTP listeners in a Gateway will share a common connection counter, and a limit defined by the policy.
  • HTTPS/TLS: each HTTPS/TLS listener will have a dedicated connection counter, and a limit defined by the policy.

Prerequisites

Install Envoy Gateway

  • Follow the steps from the Quickstart to install Envoy Gateway and the HTTPRoute example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Install the hey load testing tool

  • The hey CLI will be used to generate load and measure response times. Follow the installation instruction from the Hey project docs.

Test and customize connection limit settings

In this example, we use hey to open 10 connections and execute 1 RPS per connection for 10 seconds.

hey -c 10 -q 1 -z 10s  -host "www.example.com" http://${GATEWAY_HOST}/get
Summary:
  Total:	10.0058 secs
  Slowest:	0.0275 secs
  Fastest:	0.0029 secs
  Average:	0.0111 secs
  Requests/sec:	9.9942

[...]

Status code distribution:
  [200]	100 responses

There are no connection limits, and so all 100 requests succeed.

Next, we apply a limit of 5 connections.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: connection-limit-ctp
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  connection:
    connectionLimit:
      value: 5    
EOF

Execute the load simulation again.

hey -c 10 -q 1 -z 10s  -host "www.example.com" http://${GATEWAY_HOST}/get
Summary:
  Total:	11.0327 secs
  Slowest:	0.0361 secs
  Fastest:	0.0013 secs
  Average:	0.0088 secs
  Requests/sec:	9.0640

[...] 

Status code distribution:
  [200]	50 responses

Error distribution:
  [50]	Get "http://localhost:8888/get": EOF

With the new connection limit, only 5 of 10 connections are established, and so only 50 requests succeed.

2.5 - Fault Injection

Envoy fault injection can be used to inject delays and abort requests to mimic failure scenarios such as service failures and overloads.

Envoy Gateway supports the following fault scenarios:

  • delay fault: inject a custom fixed delay into the request with a certain probability to simulate delay failures.
  • abort fault: inject a custom response code into the response with a certain probability to simulate abort failures.

Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired fault scenarios. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. For GRPC - follow the steps from the GRPC Routing example. Before proceeding, you should be able to query the example backend using HTTP or GRPC.

Install the hey load testing tool

  • The hey CLI will be used to generate load and measure response times. Follow the installation instruction from the Hey project docs.

Configuration

Configure fault injection by creating a BackendTrafficPolicy and attaching it to the example HTTPRoute or GRPCRoute.

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: fault-injection-50-percent-abort
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: foo
  faultInjection:
    abort:
      httpStatus: 501
      percentage: 50
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: fault-injection-delay
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: bar
  faultInjection:
    delay:
      fixedDelay: 2s
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: foo
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /foo
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bar
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /bar
EOF

Two HTTPRoute resources were created, one for /foo and another for /bar. The fault-injection-50-percent-abort BackendTrafficPolicy targets HTTPRoute foo to abort 50% of requests to /foo. The fault-injection-delay BackendTrafficPolicy targets HTTPRoute bar to delay requests to /bar by 2s.

Verify the HTTPRoute configuration and status:

kubectl get httproute/foo -o yaml
kubectl get httproute/bar -o yaml

Verify the BackendTrafficPolicy configuration:

kubectl get backendtrafficpolicy/fault-injection-50-percent-abort -o yaml
kubectl get backendtrafficpolicy/fault-injection-delay -o yaml

GRPCRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: fault-injection-abort
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: GRPCRoute
    name: yages
  faultInjection:
    abort:
      grpcStatus: 14
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: yages
  labels:
    example: grpc-routing
spec:
  parentRefs:
  - name: example-gateway
  hostnames:
  - "grpc-example.com"
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: yages
      port: 9000
      weight: 1
EOF

A BackendTrafficPolicy has been created and targets GRPCRoute yages to abort requests for the yages service.

Verify the GRPCRoute configuration and status:

kubectl get grpcroute/yages -o yaml

Verify the BackendTrafficPolicy configuration:

kubectl get backendtrafficpolicy/fault-injection-abort -o yaml

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

HTTPRoute

Verify that requests to the foo route are aborted.

hey -n 1000 -c 100 -host "www.example.com"  http://${GATEWAY_HOST}/foo
Status code distribution:
  [200]	501 responses
  [501]	499 responses

Verify that requests to the bar route are delayed.

hey -n 1000 -c 100 -host "www.example.com"  http://${GATEWAY_HOST}/bar
Summary:
  Total:	20.1493 secs
  Slowest:	2.1020 secs
  Fastest:	1.9940 secs
  Average:	2.0123 secs
  Requests/sec:	49.6295

  Total data:	557000 bytes
  Size/request:	557 bytes

Response time histogram:
  1.994 [1]	|
  2.005 [475]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  2.016 [419]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  2.026 [5]	|
  2.037 [0]	|
  2.048 [0]	|
  2.059 [30]	|■■■
  2.070 [0]	|
  2.080 [0]	|
  2.091 [11]	|■
  2.102 [59]	|■■■■■

GRPCRoute

Verify that requests to the yages service are aborted.

grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping

You should see the below response

Error invoking method "yages.Echo/Ping": rpc error: code = Unavailable desc = failed to query for service descriptor "yages.Echo": fault filter abort

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the BackendTrafficPolicies:

kubectl delete backendtrafficpolicy/fault-injection-50-percent-abort
kubectl delete backendtrafficpolicy/fault-injection-delay
kubectl delete backendtrafficpolicy/fault-injection-abort

2.6 - Gateway Address

The Gateway API provides an optional Addresses field through which Envoy Gateway can set addresses for the Envoy Proxy Service. Depending on the Service type, the addresses of the Gateway can be used as the External IPs or as the Cluster IP of that Service.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest.

External IPs

Using the addresses in Gateway.Spec.Addresses as the External IPs of the Envoy Proxy Service requires the address to be of type IPAddress and the ServiceType to be LoadBalancer or NodePort.

Envoy Gateway deploys the Envoy Proxy Service as a LoadBalancer by default, so you can set the address of the Gateway directly (the address settings here are for reference only):

kubectl patch gateway eg --type=json --patch '
- op: add
  path: /spec/addresses
  value:
   - type: IPAddress
     value: 1.2.3.4
'

Verify the Gateway status:

kubectl get gateway
NAME   CLASS   ADDRESS   PROGRAMMED   AGE
eg     eg      1.2.3.4   True         14m

Verify the Envoy Proxy Service status:

kubectl get service -n envoy-gateway-system
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
envoy-default-eg-64656661       LoadBalancer   10.96.236.219   1.2.3.4       80:31017/TCP   15m
envoy-gateway                   ClusterIP      10.96.192.76    <none>        18000/TCP      15m
envoy-gateway-metrics-service   ClusterIP      10.96.124.73    <none>        8443/TCP       15m

Note: If Gateway.Spec.Addresses is explicitly set, those will be the only addresses that populate the Gateway status.

Cluster IP

Using the addresses in Gateway.Spec.Addresses as the Cluster IP of the Envoy Proxy Service requires the address to be of type IPAddress and the ServiceType to be ClusterIP.
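
A minimal sketch, assuming the Envoy Proxy Service has already been configured with ServiceType ClusterIP (for example via an EnvoyProxy resource referenced from the GatewayClass) and that 10.96.205.60 is an unused IP within your cluster’s Service CIDR (both assumptions are for illustration only):

kubectl patch gateway eg --type=json --patch '
- op: add
  path: /spec/addresses
  value:
   - type: IPAddress
     value: 10.96.205.60
'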

2.7 - Gateway API Support

As mentioned in the system design document, Envoy Gateway’s managed data plane is configured dynamically through Kubernetes resources, primarily Gateway API objects. Envoy Gateway supports configuration using the following Gateway API resources.

GatewayClass

A GatewayClass represents a “class” of gateways, i.e. which Gateways should be managed by Envoy Gateway. Envoy Gateway supports managing a single GatewayClass resource that matches its configured controllerName and follows Gateway API guidelines for resolving conflicts when multiple GatewayClasses exist with a matching controllerName.

Note: If specifying GatewayClass parameters reference, it must refer to an EnvoyProxy resource.
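
For illustration, a GatewayClass whose parameters reference an EnvoyProxy resource might look like the following sketch (the EnvoyProxy name custom-proxy-config is hypothetical):

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy-config        # hypothetical EnvoyProxy resource
    namespace: envoy-gateway-system
EOF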

Gateway

When a Gateway resource is created that references the managed GatewayClass, Envoy Gateway will create and manage a new Envoy Proxy deployment. Gateway API resources that reference this Gateway will configure this managed Envoy Proxy deployment.

HTTPRoute

A HTTPRoute configures routing of HTTP traffic through one or more Gateways. The following HTTPRoute filters are supported by Envoy Gateway:

  • requestHeaderModifier: RequestHeaderModifiers can be used to modify or add request headers before the request is proxied to its destination.
  • responseHeaderModifier: ResponseHeaderModifiers can be used to modify or add response headers before the response is sent back to the client.
  • requestMirror: RequestMirrors configure destinations where the requests should also be mirrored to. Responses to mirrored requests will be ignored.
  • requestRedirect: RequestRedirects configure policies for how requests that match the HTTPRoute should be modified and then redirected.
  • urlRewrite: UrlRewrites allow for modification of the request’s hostname and path before it is proxied to its destination.
  • extensionRef: ExtensionRefs are used by Envoy Gateway to implement extended filters. Currently, Envoy Gateway supports rate limiting and request authentication filters. For more information about these filters, refer to the rate limiting and request authentication documentation.

Notes:

  • The only BackendRef kind supported by Envoy Gateway is a Service. Routing traffic to other destinations such as arbitrary URLs is not possible.
  • Only requestHeaderModifier and responseHeaderModifier filters are currently supported within HTTPBackendRef.
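
As a minimal sketch of one of these filters, the following HTTPRoute adds a requestHeaderModifier filter to a rule that routes to the quickstart backend Service (the route name and header are illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: header-modifier-example   # hypothetical route name
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: x-example-header   # hypothetical header
          value: example
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF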

TCPRoute

A TCPRoute configures routing of raw TCP traffic through one or more Gateways. Traffic can be forwarded to the desired BackendRefs based on a TCP port number.

Note: A TCPRoute only supports proxying in non-transparent mode, i.e. the backend will see the source IP and port of the Envoy Proxy instance instead of the client.

UDPRoute

A UDPRoute configures routing of raw UDP traffic through one or more Gateways. Traffic can be forwarded to the desired BackendRefs based on a UDP port number.

Note: Similar to TCPRoutes, UDPRoutes only support proxying in non-transparent mode i.e. the backend will see the source IP and port of the Envoy Proxy instance instead of the client.

GRPCRoute

A GRPCRoute configures routing of gRPC requests through one or more Gateways. They offer request matching by hostname, gRPC service, gRPC method, or HTTP/2 Header. Envoy Gateway supports the following filters on GRPCRoutes to provide additional traffic processing:

  • requestHeaderModifier: RequestHeaderModifiers can be used to modify or add request headers before the request is proxied to its destination.
  • responseHeaderModifier: ResponseHeaderModifiers can be used to modify or add response headers before the response is sent back to the client.
  • requestMirror: RequestMirrors configure destinations where the requests should also be mirrored to. Responses to mirrored requests will be ignored.

Notes:

  • The only BackendRef kind supported by Envoy Gateway is a Service. Routing traffic to other destinations such as arbitrary URLs is not currently possible.
  • Only requestHeaderModifier and responseHeaderModifier filters are currently supported within GRPCBackendRef.

TLSRoute

A TLSRoute configures routing of TCP traffic through one or more Gateways. However, unlike TCPRoutes, TLSRoutes can match against TLS-specific metadata.

ReferenceGrant

A ReferenceGrant is used to allow a resource to reference another resource in a different namespace. Normally an HTTPRoute created in namespace foo is not allowed to reference a Service in namespace bar. A ReferenceGrant permits these types of cross-namespace references. Envoy Gateway supports the following ReferenceGrant use-cases:

  • Allowing an HTTPRoute, GRPCRoute, TLSRoute, UDPRoute, or TCPRoute to reference a Service in a different namespace.
  • Allowing an HTTPRoute’s requestMirror filter to include a BackendRef that references a Service in a different namespace.
  • Allowing a Gateway’s SecretObjectReference to reference a secret in a different namespace.
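
As a sketch of the first use-case, using the foo and bar namespaces from the example above, the ReferenceGrant would live in the Service’s namespace and look roughly like this:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-httproutes-from-foo   # hypothetical name
  namespace: bar                    # namespace of the referenced Service
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: foo                  # namespace of the HTTPRoute
  to:
  - group: ""
    kind: Service
EOF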

2.8 - Global Rate Limit

Rate limit is a feature that allows the user to limit the number of incoming requests to a predefined value based on attributes within the traffic flow.

Here are some reasons why you may want to implement rate limits:

  • To prevent malicious activity such as DDoS attacks.
  • To prevent applications and their resources (such as a database) from getting overloaded.
  • To create API limits based on user entitlements.

Envoy Gateway supports two types of rate limiting: Global rate limiting and Local rate limiting.

Global rate limiting applies a shared rate limit to the traffic flowing through all the instances of Envoy proxies where it is configured. i.e. if the data plane has 2 replicas of Envoy running, and the rate limit is 10 requests/second, this limit is shared and will be hit if 5 requests pass through the first replica and 5 requests pass through the second replica within the same second.

Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their rate limit intent. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Note: Limit is applied per route. Even if a BackendTrafficPolicy targets a gateway, each route in that gateway still has a separate rate limit bucket. For example, if a gateway has 2 routes, and the limit is 100r/s, then each route has its own 100r/s rate limit bucket.

Prerequisites

Install Envoy Gateway

  • Follow the steps from the Quickstart to install Envoy Gateway and the HTTPRoute example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Install Redis

  • The global rate limit feature is based on Envoy Ratelimit, which requires a Redis instance as its caching layer. Let’s install a Redis deployment in the redis-system namespace.
cat <<EOF | kubectl apply -f -
kind: Namespace
apiVersion: v1
metadata:
  name: redis-system 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: redis-system
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis:6.0.6
        imagePullPolicy: IfNotPresent
        name: redis
        resources:
          limits:
            cpu: 1500m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: redis-system 
  labels:
    app: redis
  annotations:
spec:
  ports:
  - name: redis
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis
EOF

Enable Global Rate limit in Envoy Gateway

  • The default installation of Envoy Gateway installs a default EnvoyGateway configuration and attaches it using a ConfigMap. In the next step, we will update this resource to enable rate limit in Envoy Gateway as well as configure the URL for the Redis instance used for Global rate limiting.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    rateLimit:
      backend:
        type: Redis
        redis:
          url: redis.redis-system.svc.cluster.local:6379
EOF

  • After updating the ConfigMap, you will need to restart the envoy-gateway deployment so that the new configuration takes effect:
kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system

Rate Limit Specific User

Here is an example of a rate limit implemented by the application developer to limit a specific user by matching on a custom x-user-id header with a value set to one.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Global
    global:
      rules:
      - clientSelectors:
        - headers:
          - name: x-user-id
            value: one
        limit:
          requests: 3
          unit: Hour
EOF

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-ratelimit -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Let’s query ratelimit.example/get 4 times. We should receive a 200 response from the example Gateway for the first 3 requests and then receive a 429 status code for the 4th request, since the limit is set at 3 requests/hour for requests that contain the header x-user-id with the value one.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

You should be able to send requests with the x-user-id header and a different value and receive successful responses from the server.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: two" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:36 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:37 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:38 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:39 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

Rate Limit Distinct Users

Here is an example of a rate limit implemented by the application developer to limit distinct users who can be differentiated based on the value in the x-user-id header. Here, user one (recognised from the traffic flow using the header x-user-id and value one) will be rate limited at 3 requests/hour and so will user two (recognised from the traffic flow using the header x-user-id and value two).

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Global
    global:
      rules:
      - clientSelectors:
        - headers:
          - type: Distinct
            name: x-user-id
        limit:
          requests: 3
          unit: Hour
EOF

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

Let’s run the same command again with the header x-user-id set to one in the request. We should see the first 3 requests succeed and the 4th request get rate limited.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

You should see the same behavior when the value for header x-user-id is set to two and 4 requests are sent.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: two" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

Rate Limit All Requests

This example shows you how to rate limit all requests matching the HTTPRoute rule at 3 requests/hour by leaving the clientSelectors field unset.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Global
    global:
      rules:
      - limit:
          requests: 3
          unit: Hour
EOF

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

Send 4 requests; the first 3 should succeed and the 4th should be rate limited:

for i in {1..4}; do curl -I --header "Host: ratelimit.example" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

Rate Limit Client IP Addresses

Here is an example of a rate limit implemented by the application developer to limit distinct users, who are differentiated by their IP address (also reflected in the X-Forwarded-For header).

Note: EG supports two kinds of rate limit for the IP address: Exact and Distinct.

  • Exact means that all IP addresses within the specified Source IP CIDR share the same rate limit bucket.
  • Distinct means that each IP address within the specified Source IP CIDR has its own rate limit bucket.
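
The example applied below uses the Distinct type with 0.0.0.0/0, so every client IP gets its own bucket. For comparison, here is a minimal sketch (not applied in this task; the CIDR is only illustrative) of a rule using the Exact type, where all clients in 192.168.0.0/16 would share a single bucket:

      rules:
      - clientSelectors:
        - sourceCIDR:
            type: Exact
            value: 192.168.0.0/16
        limit:
          requests: 3
          unit: Hour
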
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Global
    global:
      rules:
      - clientSelectors:
        - sourceCIDR: 
            value: 0.0.0.0/0
            type: Distinct
        limit:
          requests: 3
          unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

Send 4 requests from the same client; the first 3 should succeed and the 4th should be rate limited:

for i in {1..4}; do curl -I --header "Host: ratelimit.example" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:45 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:46 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:48 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Tue, 28 Mar 2023 08:28:48 GMT
server: envoy
transfer-encoding: chunked

Rate Limit JWT Claims

Here is an example of a rate limit implemented by the application developer to limit distinct users, who are differentiated by the value of the JWT claims carried in the request.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-example
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: example
  jwt:
    providers:
    - name: example
      remoteJWKS:
        uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
      claimToHeaders:
      - claim: name
        header: x-claim-name
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: example 
  rateLimit:
    type: Global
    global:
      rules:
      - clientSelectors:
        - headers:
          - name: x-claim-name
            value: John Doe
        limit:
          requests: 3
          unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /foo
EOF

Get the JWT used for testing request authentication:

TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode
TOKEN1=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/with-different-claim.jwt -s) && echo "$TOKEN1" | cut -d '.' -f2 - | base64 --decode

Rate limit for requests carrying TOKEN

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "Authorization: Bearer $TOKEN" http://${GATEWAY_HOST}/foo ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:25 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy


HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:26 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy


HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:27 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy


HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Mon, 12 Jun 2023 12:00:28 GMT
server: envoy
transfer-encoding: chunked

No rate limit for requests carrying TOKEN1

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "Authorization: Bearer $TOKEN1" http://${GATEWAY_HOST}/foo ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:34 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:35 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:36 GMT
content-length: 556
x-envoy-upstream-service-time: 1
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:37 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy

(Optional) Editing Kubernetes Resources settings for the Rate Limit Service

  • The default installation of Envoy Gateway installs a default EnvoyGateway configuration and provides initial settings for the rate limit Kubernetes resources, such as 1 replica and resource requests of 100m CPU and 512Mi memory. Other settings, such as the container image, securityContext, env, and pod annotations, can be changed by modifying the ConfigMap.

  • tls.certificateRef sets the client certificate for TLS connections to the Redis server.
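
The referenced Secret typically lives in the envoy-gateway-system namespace alongside the rate limit service. As a sketch, assuming the Redis client certificate and key were saved locally as redis-client.crt and redis-client.key (illustrative file names), it could be created with:

kubectl create secret tls ratelimit-cert --cert=redis-client.crt --key=redis-client.key -n envoy-gateway-system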

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
      kubernetes:
        rateLimitDeployment:
          replicas: 1
          container:
            image: envoyproxy/ratelimit:master
            env:
            - name: CACHE_KEY_PREFIX
              value: "eg:rl:"
            resources:
              requests:
                cpu: 100m
                memory: 512Mi
            securityContext:
              runAsUser: 2000
              allowPrivilegeEscalation: false
          pod:
            annotations:
              key1: val1
              key2: val2
            securityContext:
              runAsUser: 1000
              runAsGroup: 3000
              fsGroup: 2000
              fsGroupChangePolicy: "OnRootMismatch"
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    rateLimit:
      backend:
        type: Redis
        redis:
          url: redis.redis-system.svc.cluster.local:6379
          tls:
            certificateRef:
              name: ratelimit-cert
EOF

  • After updating the ConfigMap, restart the envoy-gateway deployment so that the new configuration takes effect:

kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
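
To confirm that the rate limit service picked up the new settings, you can inspect its deployment (a sketch, assuming the default deployment name envoy-ratelimit):

kubectl -n envoy-gateway-system get deployment envoy-ratelimit -o wide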

2.9 - GRPC Routing

The GRPCRoute resource allows users to configure gRPC routing by matching HTTP/2 traffic and forwarding it to backend gRPC servers. To learn more about gRPC routing, refer to the Gateway API documentation.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Installation

Install the gRPC routing example resources:

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/grpc-routing.yaml

The manifest installs a GatewayClass, a Gateway, a Deployment, a Service, and a GRPCRoute resource. The GatewayClass is a cluster-scoped resource that represents a class of Gateways that can be instantiated.

Note: Envoy Gateway is configured by default to manage a GatewayClass with controllerName: gateway.envoyproxy.io/gatewayclass-controller.

Verification

Check the status of the GatewayClass:

kubectl get gc --selector=example=grpc-routing

The status should reflect “Accepted=True”, indicating Envoy Gateway is managing the GatewayClass.

A Gateway represents configuration of infrastructure. When a Gateway is created, Envoy proxy infrastructure is provisioned or configured by Envoy Gateway. The gatewayClassName defines the name of a GatewayClass used by this Gateway. Check the status of the Gateway:

kubectl get gateways --selector=example=grpc-routing

The status should reflect “Ready=True”, indicating the Envoy proxy infrastructure has been provisioned. The status also provides the address of the Gateway. This address is used later to test connectivity to proxied backend services.

Check the status of the GRPCRoute:

kubectl get grpcroutes --selector=example=grpc-routing -o yaml

The status for the GRPCRoute should surface “Accepted=True” and a parentRef that references the example Gateway. The example-route matches any traffic for “grpc-example.com” and forwards it to the “yages” Service.

Testing the Configuration

Before testing GRPC routing to the yages backend, get the Gateway’s address.

export GATEWAY_HOST=$(kubectl get gateway/example-gateway -o jsonpath='{.status.addresses[0].value}')

Test GRPC routing to the yages backend using the grpcurl command.

grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping

You should see the below response

{
  "text": "pong"
}

Envoy Gateway also supports gRPC-Web requests for this configuration. The below curl command can be used to send a gRPC-Web request over HTTP/2. You should receive the same response seen in the previous command.

The data in the body AAAAAAA= is a base64 encoded representation of an empty message (data length 0) that the Ping RPC accepts.
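
To see where that value comes from, note that a gRPC-Web frame for an empty message is five zero bytes: a one-byte flags field followed by a four-byte message length of 0. Encoding those bytes yourself reproduces the request body (a local sanity check, not required for the task):

printf '\x00\x00\x00\x00\x00' | base64

The output should be AAAAAAA=.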

curl --http2-prior-knowledge -s ${GATEWAY_HOST}:80/yages.Echo/Ping -H 'Host: grpc-example.com'   -H 'Content-Type: application/grpc-web-text'   -H 'Accept: application/grpc-web-text' -XPOST -d'AAAAAAA=' | base64 -d

GRPCRoute Match

The matches field can be used to restrict the route to a specific set of requests based on GRPC’s service and/or method names. It supports two match types: Exact and RegularExpression.

Exact

Exact match is the default match type.

The following example shows how to match a request based on the service and method names for grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo, as well as a match for all services with a method name Ping which matches yages.Echo/Ping in our deployment.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: yages
  labels:
    example: grpc-routing
spec:
  parentRefs:
    - name: example-gateway
  hostnames:
    - "grpc-example.com"
  rules:
    - matches:
      - method:
          method: ServerReflectionInfo
          service: grpc.reflection.v1alpha.ServerReflection
      - method:
          method: Ping
      backendRefs:
        - group: ""
          kind: Service
          name: yages
          port: 9000
          weight: 1
EOF

Verify the GRPCRoute status:

kubectl get grpcroutes --selector=example=grpc-routing -o yaml

Test GRPC routing to the yages backend using the grpcurl command.

grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping

RegularExpression

The following example shows how to match a request based on the service and method names with match type RegularExpression. It matches all the services and methods with pattern /.*.Echo/Pin.+, which matches yages.Echo/Ping in our deployment.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: yages
  labels:
    example: grpc-routing
spec:
  parentRefs:
    - name: example-gateway
  hostnames:
    - "grpc-example.com"
  rules:
    - matches:
      - method:
          method: ServerReflectionInfo
          service: grpc.reflection.v1alpha.ServerReflection
      - method:
          method: "Pin.+"
          service: ".*.Echo"
          type: RegularExpression
      backendRefs:
        - group: ""
          kind: Service
          name: yages
          port: 9000
          weight: 1
EOF

Verify the GRPCRoute status:

kubectl get grpcroutes --selector=example=grpc-routing -o yaml

Test GRPC routing to the yages backend using the grpcurl command.

grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping

2.10 - HTTP Redirects

The HTTPRoute resource can issue redirects to clients or rewrite paths sent upstream using filters. Note that HTTPRoute rules cannot use both filter types at once. Currently, Envoy Gateway only supports the core HTTPRoute filters, which consist of RequestRedirect and RequestHeaderModifier. To learn more about HTTP routing, refer to the Gateway API documentation.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTPS.

Redirects

Redirects return HTTP 3XX responses to a client, instructing it to retrieve a different resource. A RequestRedirect filter instructs Gateways to emit a redirect response to requests that match the rule. For example, to issue a permanent redirect (301) from HTTP to HTTPS, configure requestRedirect.statusCode=301 and requestRedirect.scheme="https":

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-to-https-filter-redirect
spec:
  parentRefs:
    - name: eg
  hostnames:
    - redirect.example
  rules:
    - filters:
      - type: RequestRedirect
        requestRedirect:
          scheme: https
          statusCode: 301
          hostname: www.example.com
EOF

Note: 301 (default) and 302 are the only supported statusCodes.

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-to-https-filter-redirect -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying redirect.example/get should result in a 301 response from the example Gateway and redirecting to the configured redirect hostname.

$ curl -L -vvv --header "Host: redirect.example" "http://${GATEWAY_HOST}/get"
...
< HTTP/1.1 301 Moved Permanently
< location: https://www.example.com/get
...

If you followed the steps in the Secure Gateways task, you should be able to curl the redirect location.
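
For example (a sketch, assuming the root CA certificate from the Secure Gateways task was saved as example.com.crt and that www.example.com is served by the HTTPS listener), you can let curl follow the redirect by resolving both hostnames to the Gateway address:

curl -v -L --resolve "redirect.example:80:${GATEWAY_HOST}" --resolve "www.example.com:443:${GATEWAY_HOST}" --cacert example.com.crt "http://redirect.example/get"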

HTTP to HTTPS

Listeners expose the TLS setting on a per-domain or per-subdomain basis. The TLS settings of a listener are applied to all domains that satisfy the hostname criteria.

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/CN=example.com' -keyout CA.key -out CA.crt
openssl req -out example.com.csr -newkey rsa:2048 -nodes -keyout tls.key -subj "/CN=example.com"

Generate a wildcard certificate for example.com (with *.example.com as a subject alternative name), signed by the CA created above:

cat <<EOF | openssl x509 -req -days 365 -CA CA.crt -CAkey CA.key -set_serial 0 \
-subj "/CN=example.com" \
-in example.com.csr -out tls.crt -extensions v3_req  -extfile -
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1   = example.com
DNS.2   = *.example.com
EOF

Create the Kubernetes TLS Secret:

kubectl create secret tls example-com --key=tls.key --cert=tls.crt

Define an HTTPS listener on the existing Gateway:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    # hostname: "*.example.com"
  - name: https
    port: 443
    protocol: HTTPS
    # hostname: "*.example.com"
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: example-com
EOF

Check for any TLS certificate issues on the gateway.

kubectl -n default describe gateway eg

Create two HTTPRoutes and attach them to the HTTP and HTTPS listeners using the sectionName field.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: tls-redirect
spec:
  parentRefs:
    - name: eg
      sectionName: http
  hostnames:
    # - "*.example.com" # catch all hostnames
    - "www.example.com"
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
      sectionName: https
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF

Curl the example app through the HTTP listener:

curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/get

Curl the example app through the HTTPS listener:

curl -v -H 'Host:www.example.com' --resolve "www.example.com:443:$GATEWAY_HOST" \
--cacert CA.crt https://www.example.com:443/get

Path Redirects

Path redirects use an HTTP Path Modifier to replace either entire paths or path prefixes. For example, the HTTPRoute below will issue a 302 redirect to all path.redirect.example requests whose path begins with /get to /status/200.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-path-redirect
spec:
  parentRefs:
    - name: eg
  hostnames:
    - path.redirect.example
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: /get
      filters:
      - type: RequestRedirect
        requestRedirect:
          path:
            type: ReplaceFullPath
            replaceFullPath: /status/200
          statusCode: 302
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-filter-path-redirect -o yaml

Querying path.redirect.example should result in a 302 response from the example Gateway and a redirect location containing the configured redirect path.

Query the path.redirect.example host:

curl -vvv --header "Host: path.redirect.example" "http://${GATEWAY_HOST}/get"

You should receive a 302 with a redirect location of http://path.redirect.example/status/200.

2.11 - HTTP Request Headers

The HTTPRoute resource can modify the headers of a request before forwarding it to the upstream service. Note that HTTPRoute rules cannot use both filter types at once. Currently, Envoy Gateway only supports the core HTTPRoute filters, which consist of RequestRedirect and RequestHeaderModifier. To learn more about HTTP routing, refer to the Gateway API documentation.

A RequestHeaderModifier filter instructs Gateways to modify the headers in requests that match the rule before forwarding the request upstream. Note that the RequestHeaderModifier filter will only modify headers before the request is sent from Envoy to the upstream service and will not affect response headers returned to the downstream client.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Adding Request Headers

The RequestHeaderModifier filter can add new headers to a request before it is sent to the upstream. If the request does not have the header configured by the filter, then that header will be added to the request. If the request already has the header configured by the filter, then the value of the header in the filter will be appended to the value of the header in the request.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: "add-header"
          value: "foo"
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-headers -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying headers.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate that the upstream example app received the header add-header with the value: something,foo

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "add-header: something"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
 "headers": {
  "Accept": [
   "*/*"
  ],
  "Add-Header": [
   "something",
   "foo"
  ],
...

Setting Request Headers

Setting headers is similar to adding headers. If the request does not have the header configured by the filter, then it will be added, but unlike adding request headers which will append the value of the header if the request already contains it, setting a header will cause the value to be replaced by the value configured in the filter.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: "set-header"
          value: "foo"
EOF

Querying headers.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate that the upstream example app received the header set-header with the original value something replaced by foo.

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "set-header: something"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
 "headers": {
  "Accept": [
   "*/*"
  ],
  "Set-Header": [
   "foo"
  ],
...

Removing Request Headers

Headers can be removed from a request by simply supplying a list of header names.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        remove:
        - "remove-header"
EOF

Querying headers.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate that the upstream example app received the header add-header, but the header remove-header that was sent by curl was removed before the upstream received the request.

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "add-header: something" --header "remove-header: foo"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<

 "headers": {
  "Accept": [
   "*/*"
  ],
  "Add-Header": [
   "something"
  ],
...

Combining Filters

Headers can be added, set, and removed in a single filter on the same HTTPRoute, and they will all perform as expected:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: "add-header-1"
          value: "foo"
        set:
        - name: "set-header-1"
          value: "bar"
        remove:
        - "removed-header"
EOF
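
As a quick verification (a sketch that follows the pattern of the earlier examples), send a request carrying all three headers and inspect what the upstream echoes back: add-header-1 should contain both something and foo, set-header-1 should be replaced by bar, and removed-header should be absent.

curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "add-header-1: something" --header "set-header-1: original" --header "removed-header: to-be-dropped"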

2.12 - HTTP Response Headers

The HTTPRoute resource can modify the headers of a response before returning it to the downstream client. To learn more about HTTP routing, refer to the Gateway API documentation.

A ResponseHeaderModifier filter instructs Gateways to modify the headers in responses that match the rule before returning them to the downstream client. Note that the ResponseHeaderModifier filter only modifies headers before the response is returned from Envoy to the downstream client and does not affect the request headers forwarded to the upstream service.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Adding Response Headers

The ResponseHeaderModifier filter can add new headers to a response before it is returned to the downstream client. If the response does not have the header configured by the filter, then that header will be added to the response. If the response already has the header configured by the filter, then the value of the header in the filter will be appended to the value of the header in the response.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        add:
        - name: "add-header"
          value: "foo"
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-headers -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying headers.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate that the downstream client received the header add-header with the value: foo

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: X-Foo: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: X-Foo: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
< x-foo: value1
< add-header: foo
<
...
 "headers": {
  "Accept": [
   "*/*"
  ],
  "X-Echo-Set-Header": [
   "X-Foo: value1"
  ]
...

Setting Response Headers

Setting headers is similar to adding headers. If the response does not have the header configured by the filter, then it will be added, but unlike adding response headers which will append the value of the header if the response already contains it, setting a header will cause the value to be replaced by the value configured in the filter.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        set:
        - name: "set-header"
          value: "foo"
EOF

Querying headers.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate that the downstream client received the header set-header with the original value value1 replaced by foo.

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: set-header: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: set-header: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
< set-header: foo
<
 "headers": {
  "Accept": [
   "*/*"
  ],
  "X-Echo-Set-Header": [
    "set-header": value1"
  ]
...

Removing Response Headers

Headers can be removed from a response by simply supplying a list of header names.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        remove:
        - "remove-header"
EOF

Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the header remove-header, which the example app was asked to set via X-Echo-Set-Header, was removed before the response reached the downstream client.

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: remove-header: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: remove-header: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<

 "headers": {
  "Accept": [
   "*/*"
  ],
  "X-Echo-Set-Header": [
    "remove-header": value1"
  ]
...

Combining Filters

Headers can be added, set, and removed in a single filter on the same HTTPRoute, and they will all perform as expected:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        add:
        - name: "add-header-1"
          value: "foo"
        set:
        - name: "set-header-1"
          value: "bar"
        remove:
        - "removed-header"
EOF
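
As a quick verification (a sketch following the earlier examples and the example app's X-Echo-Set-Header helper), ask the backend to set set-header-1 and inspect the response headers: you should see add-header-1: foo and set-header-1: bar, while a removed-header set the same way would be stripped before reaching the client.

curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: set-header-1: original'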

2.13 - HTTP Routing

The HTTPRoute resource allows users to configure HTTP routing by matching HTTP traffic and forwarding it to Kubernetes backends. Currently, the only backend supported by Envoy Gateway is a Service resource. This task shows how to route traffic based on host, header, and path fields and forward the traffic to different Kubernetes Services. To learn more about HTTP routing, refer to the Gateway API documentation.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Installation

Install the HTTP routing example resources:

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/http-routing.yaml

The manifest installs a GatewayClass, Gateway, four Deployments, four Services, and three HTTPRoute resources. The GatewayClass is a cluster-scoped resource that represents a class of Gateways that can be instantiated.

Note: Envoy Gateway is configured by default to manage a GatewayClass with controllerName: gateway.envoyproxy.io/gatewayclass-controller.

Verification

Check the status of the GatewayClass:

kubectl get gc --selector=example=http-routing

The status should reflect “Accepted=True”, indicating Envoy Gateway is managing the GatewayClass.

A Gateway represents configuration of infrastructure. When a Gateway is created, Envoy proxy infrastructure is provisioned or configured by Envoy Gateway. The gatewayClassName defines the name of a GatewayClass used by this Gateway. Check the status of the Gateway:

kubectl get gateways --selector=example=http-routing

The status should reflect “Ready=True”, indicating the Envoy proxy infrastructure has been provisioned. The status also provides the address of the Gateway. This address is used later to test connectivity to proxied backend services.

The three HTTPRoute resources create routing rules on the Gateway. In order to receive traffic from a Gateway, an HTTPRoute must be configured with parentRefs which reference the parent Gateway(s) that it should be attached to. An HTTPRoute can match against a single set of hostnames. These hostnames are matched before any other matching within the HTTPRoute takes place. Since example.com, foo.example.com, and bar.example.com are separate hosts with different routing requirements, each is deployed as its own HTTPRoute: example-route, foo-route, and bar-route.

Check the status of the HTTPRoutes:

kubectl get httproutes --selector=example=http-routing -o yaml

The status for each HTTPRoute should surface “Accepted=True” and a parentRef that references the example Gateway. The example-route matches any traffic for “example.com” and forwards it to the “example-svc” Service.

Testing the Configuration

Before testing HTTP routing to the example-svc backend, get the Gateway’s address.

export GATEWAY_HOST=$(kubectl get gateway/example-gateway -o jsonpath='{.status.addresses[0].value}')

Test HTTP routing to the example-svc backend.

curl -vvv --header "Host: example.com" "http://${GATEWAY_HOST}/"

A 200 status code should be returned and the body should include "pod": "example-backend-*" indicating the traffic was routed to the example backend service. If you change the hostname to a hostname not represented in any of the HTTPRoutes, e.g. “www.example.com”, the HTTP traffic will not be routed and a 404 should be returned.
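
For example (a sketch using curl's -w option to print only the status code), requesting an unmatched hostname should return 404:

curl -s -o /dev/null -w "%{http_code}\n" --header "Host: www.example.com" "http://${GATEWAY_HOST}/"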

The foo-route matches any traffic for foo.example.com and applies its routing rules to forward the traffic to the “foo-svc” Service. Since there is only one path prefix match for /login, only foo.example.com/login/* traffic will be forwarded. Test HTTP routing to the foo-svc backend.

curl -vvv --header "Host: foo.example.com" "http://${GATEWAY_HOST}/login"

A 200 status code should be returned and the body should include "pod": "foo-backend-*" indicating the traffic was routed to the foo backend service. Traffic to any other paths that do not begin with /login will not be matched by this HTTPRoute. Test this by removing /login from the request.

curl -vvv --header "Host: foo.example.com" "http://${GATEWAY_HOST}/"

The HTTP traffic will not be routed and a 404 should be returned.

Similarly, the bar-route HTTPRoute matches traffic for bar.example.com. All traffic for this hostname will be evaluated against the routing rules. The most specific match will take precedence which means that any traffic with the env:canary header will be forwarded to bar-svc-canary and if the header is missing or not canary then it’ll be forwarded to bar-svc. Test HTTP routing to the bar-svc backend.

curl -vvv --header "Host: bar.example.com" "http://${GATEWAY_HOST}/"

A 200 status code should be returned and the body should include "pod": "bar-backend-*" indicating the traffic was routed to the bar backend service.

Test HTTP routing to the bar-canary-svc backend by adding the env: canary header to the request.

curl -vvv --header "Host: bar.example.com" --header "env: canary" "http://${GATEWAY_HOST}/"

A 200 status code should be returned and the body should include "pod": "bar-canary-backend-*" indicating the traffic was routed to the bar canary backend service.

JWT Claims Based Routing

Users can route to a specific backend by matching on JWT claims. This can be achieved by defining a SecurityPolicy with a jwt configuration that does the following:

  • Converts JWT claims to headers, which can be used for header-based routing
  • Sets the recomputeRoute field to true. This is required so that the incoming request matches on a fallback/catch-all route where the JWT can be authenticated, the claims from the JWT can be converted to headers, and then the route match can be recomputed to match based on the updated headers.

For this feature to work, please make sure:

  • You have a fallback route rule defined; the backend for this route rule can be invalid.
  • The SecurityPolicy is applied to both the fallback route and the route with the claim header matches, to avoid spoofing.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-example
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: jwt-claim-routing
  jwt:
    providers:
      - name: example
        recomputeRoute: true
        claimToHeaders:
          - claim: sub
            header: x-sub
          - claim: admin
            header: x-admin
          - claim: name
            header: x-name
        remoteJWKS:
          uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: jwt-claim-routing
spec:
  parentRefs:
    - name: eg
  rules:
    - backendRefs:
        - kind: Service
          name: foo-svc
          port: 8080
          weight: 1
      matches:
        - headers:
            - name: x-name
              value: John Doe
    - backendRefs:
        - kind: Service
          name: bar-svc
          port: 8080
          weight: 1
      matches:
        - headers:
            - name: x-name
              value: Tom
    # catch all
    - backendRefs:
        - kind: Service
          name: infra-backend-invalid
          port: 8080
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF

Get the JWT used for testing request authentication:

TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode

Test routing to the foo-svc backend by specifying a JWT Token with a claim name: John Doe.

curl -sS -H "Host: foo.example.com" -H "Authorization: Bearer $TOKEN" "http://${GATEWAY_HOST}/login" | jq .pod
"foo-backend-6df8cc6b9f-fmwcg"

Get another JWT used for testing request authentication:

TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/with-different-claim.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode

Test HTTP routing to the bar-svc backend by specifying a JWT token whose name claim is Tom.

curl -sS -H "Host: bar.example.com" -H "Authorization: Bearer $TOKEN" "http://${GATEWAY_HOST}/" | jq .pod
"bar-backend-6688b8944c-s8htr"

2.14 - HTTP Timeouts

The default request timeout is set to 15 seconds in Envoy Proxy. The HTTPRouteTimeouts resource allows users to configure request timeouts for an HTTPRouteRule. This task shows you how to configure timeouts.

HTTPRouteTimeouts supports two kinds of timeouts:

  • request: Request specifies the maximum duration for a gateway to respond to an HTTP request.
  • backendRequest: BackendRequest specifies a timeout for an individual request from the gateway to a backend.

Note: The Request duration must be >= BackendRequest duration
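
For example, both timeouts can be set on the same rule. The following fragment is a minimal sketch (the route name, hostname, and values are illustrative, reusing the quickstart Gateway and backend) that gives the gateway 10 seconds overall while limiting each attempt to the backend to 5 seconds:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: timeout-sketch               # illustrative name
spec:
  parentRefs:
  - name: eg
  hostnames:
  - timeout-sketch.example.com       # illustrative hostname
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: backend
      port: 3000
    timeouts:
      request: "10s"                 # total time the gateway waits before answering the client
      backendRequest: "5s"           # per-attempt timeout to the backend; must not exceed request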

Installation

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verification

The example backend can delay its responses; we use this capability to control the response time in the following tests.

request timeout

First, we ask the backend to delay its response by 3 seconds (via the delay query parameter) while setting the request timeout to 4 seconds. Envoy Gateway will successfully respond to the request.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  hostnames:
  - timeout.example.com
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
    timeouts:
      request: "4s"
EOF

curl --header "Host: timeout.example.com" http://${GATEWAY_HOST}/?delay=3s  -I
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 04 Mar 2024 02:34:21 GMT
content-length: 480

Then we set the request timeout to 2 seconds. In this case, Envoy Gateway will respond with a timeout.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  hostnames:
  - timeout.example.com
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
    timeouts:
      request: "2s"
EOF

curl --header "Host: timeout.example.com" http://${GATEWAY_HOST}/?delay=3s  -v
*   Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80
> GET /?delay=3s HTTP/1.1
> Host: timeout.example.com
> User-Agent: curl/8.6.0
> Accept: */*
>


< HTTP/1.1 504 Gateway Timeout
< content-length: 24
< content-type: text/plain
< date: Mon, 04 Mar 2024 02:35:03 GMT
<
* Connection #0 to host 127.0.0.1 left intact
upstream request timeout

2.15 - HTTP URL Rewrite

HTTPURLRewriteFilter defines a filter that modifies a request during forwarding. At most one of these filters may be used on a Route rule. This MUST NOT be used on the same Route rule as an HTTPRequestRedirect filter.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Rewrite URL Prefix Path

You can configure Envoy Gateway to rewrite the URL prefix as shown below. In this example, any request to http://${GATEWAY_HOST}/get/xxx is rewritten to http://${GATEWAY_HOST}/replace/xxx.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-url-rewrite
spec:
  parentRefs:
    - name: eg
  hostnames:
    - path.rewrite.example
  rules:
    - matches:
      - path:
          value: "/get"
      filters:
      - type: URLRewrite
        urlRewrite:
          path:
            type: ReplacePrefixMatch
            replacePrefixMatch: /replace
      backendRefs:
      - name: backend
        port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-filter-url-rewrite -o yaml
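
If you only want to inspect the relevant conditions rather than the full resource, a jsonpath query such as the one below narrows the output (the field path follows the standard Gateway API status layout):

kubectl get httproute/http-filter-url-rewrite -o jsonpath='{.status.parents[0].conditions}'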

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying http://${GATEWAY_HOST}/get/origin/path should rewrite to http://${GATEWAY_HOST}/replace/origin/path.

$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get/origin/path"
...
> GET /get/origin/path HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>

< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:03:28 GMT
< content-length: 503
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/replace/origin/path",
 "host": "path.rewrite.example",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.85.0"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Original-Path": [
   "/get/origin/path"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "fd84b842-9937-4fb5-83c7-61470d854b90"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-6fdd4b9bd8-8vlc5"
...

You can see that the X-Envoy-Original-Path is /get/origin/path, but the actual path is /replace/origin/path.

Rewrite URL Full Path

You can configure Envoy Gateway to rewrite the full URL path as shown below. In this example, any request sent to http://${GATEWAY_HOST}/get/origin/path/xxxx is rewritten to http://${GATEWAY_HOST}/force/replace/fullpath.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-url-rewrite
spec:
  parentRefs:
    - name: eg
  hostnames:
    - path.rewrite.example
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: "/get/origin/path"
      filters:
      - type: URLRewrite
        urlRewrite:
          path:
            type: ReplaceFullPath
            replaceFullPath: /force/replace/fullpath
      backendRefs:
      - name: backend
        port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-filter-url-rewrite -o yaml

Querying http://${GATEWAY_HOST}/get/origin/path/extra should rewrite the request to http://${GATEWAY_HOST}/force/replace/fullpath.

$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get/origin/path/extra"
...
> GET /get/origin/path/extra HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:09:31 GMT
< content-length: 512
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/force/replace/fullpath",
 "host": "path.rewrite.example",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.85.0"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Original-Path": [
   "/get/origin/path/extra"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "8ab774d6-9ffa-4faa-abbb-f45b0db00895"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-6fdd4b9bd8-8vlc5"
...

You can see that the X-Envoy-Original-Path is /get/origin/path/extra, but the actual path is /force/replace/fullpath.

Rewrite Host Name

You can configure Envoy Gateway to rewrite the hostname as shown below. In this example, any request sent to http://${GATEWAY_HOST}/get with --header "Host: path.rewrite.example" has its host rewritten to envoygateway.io.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-url-rewrite
spec:
  parentRefs:
    - name: eg
  hostnames:
    - path.rewrite.example
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: "/get"
      filters:
      - type: URLRewrite
        urlRewrite:
          hostname: "envoygateway.io"
      backendRefs:
      - name: backend
        port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-filter-url-rewrite -o yaml

Querying http://${GATEWAY_HOST}/get with --header "Host: path.rewrite.example" should rewrite the host to envoygateway.io.

$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:15:15 GMT
< content-length: 481
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/get",
 "host": "envoygateway.io",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.85.0"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Forwarded-Host": [
   "path.rewrite.example"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "39aa447c-97b9-45a3-a675-9fb266ab1af0"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-6fdd4b9bd8-8vlc5"
...

You can see that the X-Forwarded-Host is path.rewrite.example, but the actual host is envoygateway.io.

2.16 - HTTP3

This task will help you get started with HTTP/3 using Envoy Gateway. It uses a self-signed CA, so it should be used for testing and demonstration purposes only.

Prerequisites

  • OpenSSL to generate TLS assets.

Installation

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

TLS Certificates

Generate the certificates and keys used by the Gateway to terminate client TLS connections.

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt

Create a certificate and a private key for www.example.com:

openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt

Store the cert/key in a Secret:

kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt

Update the Gateway from the Quickstart to include an HTTPS listener that listens on port 443 and references the example-cert Secret:

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/-
    value:
      name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
        - kind: Secret
          group: ""
          name: example-cert
  '

Apply the following ClientTrafficPolicy to enable HTTP/3:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-http3
spec:
  http3: {}
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
EOF

Verify the Gateway status:

kubectl get gateway/eg -o yaml

Testing

Get the External IP of the Gateway:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Query the example app through the Gateway:

The example below uses a custom Docker image that bundles a curl binary built with HTTP/3 support.

docker run --net=host --rm ghcr.io/macbre/curl-http3 curl -kv --http3 -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" https://www.example.com/get

It is not currently possible to port-forward UDP traffic through a Kubernetes Service (see https://github.com/kubernetes/kubernetes/issues/47862), so an external load balancer is required to test this feature.
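
Before testing, you can optionally confirm that the generated Envoy Service exposes a UDP port for QUIC; listing the Service ports is a quick sanity check (expecting a UDP entry on port 443 is an assumption about how HTTP/3 is served, not something this task verifies):

kubectl get svc -n envoy-gateway-system -o wide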

2.17 - HTTPRoute Request Mirroring

The HTTPRoute resource allows one or more backendRefs to be provided. Requests will be routed to these upstreams. It is possible to divide the traffic between these backends using Traffic Splitting, but it is also possible to mirror requests to another Service instead. Request mirroring is accomplished using Gateway API’s HTTPRequestMirrorFilter on the HTTPRoute.

When requests are made to an HTTPRoute that uses an HTTPRequestMirrorFilter, the response will never come from the backendRef defined in the filter; responses from the mirror backendRef are always ignored.

Installation

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Mirroring the Traffic

Next, create a new Deployment and Service to mirror requests to. The following example will use a second instance of the application deployed in the quickstart.

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-2
---
apiVersion: v1
kind: Service
metadata:
  name: backend-2
  labels:
    app: backend-2
    service: backend-2
spec:
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-2
      version: v1
  template:
    metadata:
      labels:
        app: backend-2
        version: v1
    spec:
      serviceAccountName: backend-2
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: backend-2
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
EOF

Then create an HTTPRoute that uses an HTTPRequestMirrorFilter to send requests to the original service from the quickstart and mirror them to the service that was just deployed.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-mirror
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          kind: Service
          name: backend-2
          port: 3000
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-mirror -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying backends.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate which pod handled the request. There is only one pod in the deployment for the example app from the quickstart, so it will be the same on all subsequent requests.

$ curl -v --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...

 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-79665566f5-s589f"
...

Check the logs of the pods and you will see that the original deployment and the new deployment each got a request:

$ kubectl logs deploy/backend && kubectl logs deploy/backend-2
...
Starting server, listening on port 3000 (http)
Echoing back request made to /get to client (10.42.0.10:41566)
Starting server, listening on port 3000 (http)
Echoing back request made to /get to client (10.42.0.10:45096)

Multiple BackendRefs

When an HTTPRoute has multiple backendRefs and an HTTPRequestMirrorFilter, traffic splitting still behaves the same as it normally would for the main backendRefs, while the backendRef of the HTTPRequestMirrorFilter continues to receive mirrored copies of the incoming requests.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-mirror
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          kind: Service
          name: backend-2
          port: 3000
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
    - group: ""
      kind: Service
      name: backend-3
      port: 3000
EOF

Multiple HTTPRequestMirrorFilters

Multiple HTTPRequestMirrorFilters are not supported on the same HTTPRoute rule. When attempting to do so, the admission webhook will reject the configuration.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-mirror
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          kind: Service
          name: backend-2
          port: 3000
    - type: RequestMirror
      requestMirror:
        backendRef:
          kind: Service
          name: backend-3
          port: 3000
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

Error from server: error when creating "STDIN": admission webhook "validate.gateway.networking.k8s.io" denied the request: spec.rules[0].filters: Invalid value: "RequestMirror": cannot be used multiple times in the same rule

2.18 - HTTPRoute Traffic Splitting

The HTTPRoute resource allows one or more backendRefs to be provided. Requests will be routed to these upstreams if they match the rules of the HTTPRoute. If an invalid backendRef is configured, then HTTP responses will be returned with status code 500 for all requests that would have been sent to that backend.

Installation

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Single backendRef

When a single backendRef is configured in an HTTPRoute, it will receive 100% of the traffic.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-headers -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying backends.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate which pod handled the request. There is only one pod in the deployment for the example app from the quickstart, so it will be the same on all subsequent requests.

$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-79665566f5-s589f"
...

Multiple backendRefs

If multiple backendRefs are configured, then traffic will be split between the backendRefs equally unless a weight is configured.

First, create a second instance of the example app from the quickstart:

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-2
---
apiVersion: v1
kind: Service
metadata:
  name: backend-2
  labels:
    app: backend-2
    service: backend-2
spec:
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-2
      version: v1
  template:
    metadata:
      labels:
        app: backend-2
        version: v1
    spec:
      serviceAccountName: backend-2
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: backend-2
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
EOF

Then create an HTTPRoute that routes traffic to both the app from the quickstart and the second instance that was just created:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
    - group: ""
      kind: Service
      name: backend-2
      port: 3000
EOF

Querying backends.example/get should result in 200 responses from the example Gateway, and the pod reported in the example app's output should alternate between the first pod and the second one from the new deployment on subsequent requests.

$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-75bcd4c969-lsxpz"
...

Weighted backendRefs

If multiple backendRefs are configured and an uneven traffic split between the backends is desired, the weight field can be used to control the proportion of requests sent to each backend. If weight is not configured for a backendRef, it is assumed to be 1.

The weight field in a backendRef controls the distribution of the traffic split. The proportion of requests to a single backendRef is calculated by dividing its weight by the sum of all backendRef weights in the HTTPRoute. The weight is not a percentage and the sum of all weights does not need to add up to 100.

The HTTPRoute below will configure the gateway to send 80% of the traffic to the backend service, and 20% to the backend-2 service.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 8
    - group: ""
      kind: Service
      name: backend-2
      port: 3000
      weight: 2
EOF

Invalid backendRefs

backendRefs can be considered invalid for the following reasons:

  • The group field is configured to something other than "". Currently, only the core API group (specified by omitting the group field or setting it to an empty string) is supported
  • The kind field is configured to anything other than Service. Envoy Gateway currently only supports Kubernetes Service backendRefs
  • The backendRef configures a service with a namespace not permitted by any existing ReferenceGrants
  • The port field is not configured or is configured to a port that does not exist on the Service
  • The named Service configured by the backendRef cannot be found

Modifying the above example to make the backend-2 backendRef invalid by using a port that does not exist on the Service will result in 80% of the traffic being sent to the backend service, and 20% of the traffic receiving an HTTP response with status code 500.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 8
    - group: ""
      kind: Service
      name: backend-2
      port: 9000
      weight: 2
EOF

Querying backends.example/get should result in 200 responses 80% of the time, and 500 responses 20% of the time.

$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 500 Internal Server Error
< server: envoy
< content-length: 0
<
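
To understand why the backend-2 reference is treated as invalid, you can inspect the route's status conditions; the exact condition reasons may vary between versions, but the ResolvedRefs condition typically surfaces the cause:

kubectl get httproute/http-headers -o jsonpath='{.status.parents[0].conditions}'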

2.19 - Load Balancing

Envoy load balancing is a way of distributing traffic between multiple hosts within a single upstream cluster in order to effectively make use of available resources.

Envoy Gateway supports the following load balancing policies:

  • Round Robin: a simple policy in which each available upstream host is selected in round robin order.
  • Random: load balancer selects a random available host.
  • Least Request: load balancer uses different algorithms depending on whether hosts have the same or different weights.
  • Consistent Hash: load balancer implements consistent hashing to upstream hosts.

Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired load balancing policies. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource. If loadBalancer is not specified in BackendTrafficPolicy, the default load balancing policy is Least Request.
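
A BackendTrafficPolicy does not have to target an individual route; attaching it to a Gateway applies the policy to every route under that Gateway. The snippet below is a minimal sketch of that pattern (the policy name is illustrative) and is not required for the examples in this task:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: gateway-wide-lb-policy   # illustrative name
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  loadBalancer:
    type: RoundRobin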

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

To better exercise the load balancer, you can add more hosts to the upstream cluster by increasing the replicas of the backend deployment:

kubectl patch deployment backend -n default -p '{"spec": {"replicas": 4}}'
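
You can confirm that all four replicas are running before generating load:

kubectl get pods -l app=backend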

Install the hey load testing tool

Install the hey CLI tool; it will be used to generate load and measure response times.

Follow the installation instructions from the Hey project docs.

Round Robin

This example will create a Load Balancer with Round Robin policy via BackendTrafficPolicy.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: round-robin-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: round-robin-route
  loadBalancer:
    type: RoundRobin
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: round-robin-route
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /round
      backendRefs:
        - name: backend
          port: 3000
EOF

The hey tool will be used to generate 100 concurrent requests.

hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/round
Summary:
  Total:	0.0487 secs
  Slowest:	0.0440 secs
  Fastest:	0.0181 secs
  Average:	0.0307 secs
  Requests/sec:	2053.1676

  Total data:	50500 bytes
  Size/request:	505 bytes

Response time histogram:
  0.018 [1]	    |■■
  0.021 [2]  	|■■■■
  0.023 [10]	|■■■■■■■■■■■■■■■■■■■■■■
  0.026 [16]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.028 [7]  	|■■■■■■■■■■■■■■■■
  0.031 [10]	|■■■■■■■■■■■■■■■■■■■■■■
  0.034 [17]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.036 [18]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.039 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■
  0.041 [6] 	|■■■■■■■■■■■■■
  0.044 [2]	    |■■■■

As a result, you can see that all available upstream hosts receive traffic evenly.

kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-2gfp7: received 26 requests
backend-69fcff487f-69g8c: received 25 requests
backend-69fcff487f-bqwpr: received 24 requests
backend-69fcff487f-kbn8l: received 25 requests

Note that these results may vary; the output here is for reference only.

Random

This example will create a Load Balancer with Random policy via BackendTrafficPolicy.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: random-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: random-route
  loadBalancer:
    type: Random
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: random-route
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /random
      backendRefs:
        - name: backend
          port: 3000
EOF

The hey tool will be used to send 1,000 requests with 100 concurrent workers.

hey -n 1000 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/random
Summary:
  Total:	0.2624 secs
  Slowest:	0.0851 secs
  Fastest:	0.0007 secs
  Average:	0.0179 secs
  Requests/sec:	3811.3020

  Total data:	506000 bytes
  Size/request:	506 bytes

Response time histogram:
  0.001 [1] 	|
  0.009 [421]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.018 [219]	|■■■■■■■■■■■■■■■■■■■■■
  0.026 [118]	|■■■■■■■■■■■
  0.034 [64]	|■■■■■■
  0.043 [73]	|■■■■■■■
  0.051 [41]	|■■■■
  0.060 [22]	|■■
  0.068 [19]	|■■
  0.077 [13]	|■
  0.085 [9] 	|■

As a result, you can see that all available upstream hosts receive traffic randomly.

kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-bf6lm: received 246 requests
backend-69fcff487f-gwmqk: received 256 requests
backend-69fcff487f-mzngr: received 230 requests
backend-69fcff487f-xghqq: received 268 requests

Note that these results may vary; the output here is for reference only.

Least Request

This example will create a Load Balancer with Least Request policy via BackendTrafficPolicy.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: least-request-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: least-request-route
  loadBalancer:
    type: LeastRequest
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: least-request-route
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /least
      backendRefs:
        - name: backend
          port: 3000
EOF

The hey tool will be used to generate 100 concurrent requests.

hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/least
Summary:
  Total:	0.0489 secs
  Slowest:	0.0479 secs
  Fastest:	0.0054 secs
  Average:	0.0297 secs
  Requests/sec:	2045.9317

  Total data:	50500 bytes
  Size/request:	505 bytes

Response time histogram:
  0.005 [1] 	|■■
  0.010 [1] 	|■■
  0.014 [8] 	|■■■■■■■■■■■■■■■
  0.018 [6] 	|■■■■■■■■■■■
  0.022 [11]	|■■■■■■■■■■■■■■■■■■■■
  0.027 [7] 	|■■■■■■■■■■■■■
  0.031 [15]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.035 [13]	|■■■■■■■■■■■■■■■■■■■■■■■■
  0.039 [22]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.044 [12]	|■■■■■■■■■■■■■■■■■■■■■■
  0.048 [4] 	|■■■■■■■

As a result, you can see that all available upstream hosts receive traffic, with host backend-69fcff487f-6l2pw receiving fewer requests than the others.

kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-59hvs: received 24 requests
backend-69fcff487f-6l2pw: received 19 requests
backend-69fcff487f-ktsx4: received 30 requests
backend-69fcff487f-nqxc7: received 27 requests

If you send one more request to ${GATEWAY_HOST}/least, host backend-69fcff487f-6l2pw is the most likely to receive it, since it has handled the fewest requests so far.

backend-69fcff487f-59hvs: received 24 requests
backend-69fcff487f-6l2pw: received 20 requests
backend-69fcff487f-ktsx4: received 30 requests
backend-69fcff487f-nqxc7: received 27 requests

Note that these results may vary; the output here is for reference only.

Consistent Hash

This example will create a Load Balancer with Consistent Hash policy via BackendTrafficPolicy.

The underlying consistent hashing algorithm that Envoy Gateway uses is Maglev, and it can derive the hash from the following aspects:

  • SourceIP
  • Header
  • Cookie

These are also the supported values for the consistent hash type.

Source IP

This example will create a Load Balancer with Source IP based Consistent Hash policy.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: source-ip-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: source-ip-route
  loadBalancer:
    type: ConsistentHash
    consistentHash:
      type: SourceIP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: source-ip-route
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /source
      backendRefs:
        - name: backend
          port: 3000
EOF

The hey tool will be used to generate 100 concurrent requests.

hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/source
Summary:
  Total:	0.0539 secs
  Slowest:	0.0500 secs
  Fastest:	0.0198 secs
  Average:	0.0340 secs
  Requests/sec:	1856.5666

  Total data:	50600 bytes
  Size/request:	506 bytes

Response time histogram:
  0.020 [1] 	|■■
  0.023 [5] 	|■■■■■■■■■■■
  0.026 [12]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.029 [16]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.032 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■
  0.035 [7] 	|■■■■■■■■■■■■■■■■
  0.038 [8] 	|■■■■■■■■■■■■■■■■■■
  0.041 [18]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.044 [15]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.047 [4] 	|■■■■■■■■■
  0.050 [3] 	|■■■■■■■

As a result, you can see that all traffic is routed to a single upstream host, since all requests are sent from the same client source IP.

kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-grzkj: received 0 requests
backend-69fcff487f-n4d8w: received 100 requests
backend-69fcff487f-tb7zx: received 0 requests
backend-69fcff487f-wbzpg: received 0 requests

If you send the requests from a different client, the upstream host that receives the traffic may vary.

This example will create a Load Balancer with a Header-based Consistent Hash policy.
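
A minimal sketch of such a policy is shown below; the policy name, route name, and header name are illustrative and should be adjusted to match your own HTTPRoute:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: header-policy            # illustrative name
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: header-route         # illustrative route name
  loadBalancer:
    type: ConsistentHash
    consistentHash:
      type: Header
      header:
        name: x-some-header      # requests carrying the same value for this header hash to the same host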