Tasks

Learn Envoy Gateway hands-on through tasks

1 - Quickstart

Get started with Envoy Gateway in a few simple steps.

This “quick start” will help you get started with Envoy Gateway in a few simple steps.

Prerequisites

A Kubernetes cluster.

Note: Refer to the Compatibility Matrix for supported Kubernetes versions.

Note: If your Kubernetes cluster does not have a LoadBalancer implementation, we recommend installing one, such as MetalLB, so that the Gateway resource has an Address associated with it (a minimal installation sketch follows the notes below).

Note: For macOS users, you need to install and run Docker Mac Net Connect to make the Docker network work.
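
If you need to install a LoadBalancer implementation, the following is a minimal MetalLB sketch; the pinned MetalLB version and the address pool range are assumptions, so adjust the pool to an unused range on your cluster's network:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml
kubectl wait --timeout=5m -n metallb-system deployment/controller --for=condition=Available

cat <<EOF | kubectl apply -f -
# Assumed address range; pick addresses that are free on your network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool
  namespace: metallb-system
spec:
  addresses:
    - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - example-pool
EOF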

Installation

Install the Gateway API CRDs and Envoy Gateway:

helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace

Wait for Envoy Gateway to become available:

kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available

Install the GatewayClass, Gateway, HTTPRoute and example app:

kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/latest/quickstart.yaml -n default

Note: quickstart.yaml defines that Envoy Gateway will listen for traffic on port 80 on its globally-routable IP address, to make it easy to use browsers to test Envoy Gateway. When Envoy Gateway sees that its Listener is using a privileged port (<1024), it will map this internally to an unprivileged port, so that Envoy Gateway doesn’t need additional privileges. It’s important to be aware of this mapping, since you may need to take it into consideration when debugging.
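
As a sketch of how to confirm the mapping, you can inspect the targetPort on the generated Envoy service; the assumption here is that the internal port follows the listener-port-plus-10000 convention (e.g. 80 maps to 10080):

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

# Shows the internal (unprivileged) port that listener port 80 maps to.
kubectl get svc/${ENVOY_SERVICE} -n envoy-gateway-system -o jsonpath='{.spec.ports[?(@.port==80)].targetPort}'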

Testing the Configuration

Test the configuration by sending traffic to the Gateway's external IP. To get the external IP of the Envoy service, run:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

In certain environments, the load balancer may be exposed using a hostname instead of an IP address; the command above returns whichever address value is populated in the Gateway status.

Curl the example app through Envoy proxy:

curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/get

Alternatively, if your Gateway has no external address, you can test through a port-forward. Get the name of the Envoy service created by the example Gateway:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

Port forward to the Envoy service:

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8888:80 &

Curl the example app through Envoy proxy:

curl --verbose --header "Host: www.example.com" http://localhost:8888/get

What to explore next?

In this quickstart, you have:

  • Installed Envoy Gateway
  • Deployed a backend service, and a gateway
  • Configured the gateway using Kubernetes Gateway API resources Gateway and HTTPRoute to direct incoming requests over HTTP to the backend service.

Here is a suggested list of follow-on tasks to guide you in your exploration of Envoy Gateway:

Review the Tasks section for the scenario matching your use case. The Envoy Gateway tasks are organized by category: traffic management, security, extensibility, observability, and operations.

Clean-Up

Use the steps in this section to uninstall everything from the quickstart.

Delete the GatewayClass, Gateway, HTTPRoute and Example App:

kubectl delete -f https://github.com/envoyproxy/gateway/releases/download/latest/quickstart.yaml --ignore-not-found=true

Delete the Gateway API CRDs and Envoy Gateway:

helm uninstall eg -n envoy-gateway-system

Next Steps

Check out the Developer Guide to get involved in the project.

2 - Traffic

This section includes Traffic Management tasks.

2.1 - Backend Routing

Envoy Gateway supports routing to native K8s resources such as Service and ServiceImport. The Backend API is a custom Envoy Gateway extension resource that can be used in the Gateway API BackendObjectReference.

Motivation

The Backend API was added to support several use cases:

  • Allowing users to integrate Envoy with services (Ext Auth, Rate Limit, ALS, …) using Unix Domain Sockets, which are currently not supported by K8s.
  • Simplifying routing to cluster-external backends, which currently requires users to maintain both K8s Service and EndpointSlice resources.

Warning

Similar to the K8s EndpointSlice API, the Backend API can be misused to allow traffic to be sent to otherwise restricted destinations, as described in CVE-2021-25740. A Backend resource can be used to:

  • Expose a Service or Pod that should not be accessible
  • Reference a Service or Pod by a Route without appropriate Reference Grants
  • Expose the Envoy Proxy localhost (including the Envoy admin endpoint)

For these reasons, the Backend API is disabled by default in Envoy Gateway configuration. Envoy Gateway admins are advised to follow upstream recommendations and restrict access to the Backend API using K8s RBAC.

Restrictions

The Backend API is currently supported only in the following BackendReferences:

The Backend API supports attachment of the following policies:

Certain restrictions apply on the value of hostnames and addresses. For example, the loopback IP address range and the localhost hostname are forbidden.

Envoy Gateway does not manage the lifecycle of Unix Domain Sockets referenced by the Backend resource. Envoy Gateway admins are responsible for creating and mounting the sockets into the Envoy proxy pod. The latter can be achieved by patching the Envoy deployment using the EnvoyProxy resource.
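
For illustration, here is a minimal sketch of a Backend pointing at a Unix Domain Socket; the resource name and socket path are hypothetical, and the socket must already be mounted into the Envoy proxy pod as described above:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: ext-auth-uds        # hypothetical name
  namespace: default
spec:
  endpoints:
    - unix:
        path: /var/run/example/ext-auth.sock   # hypothetical pre-mounted socket
EOF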

Quickstart

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Enable Backend

  • By default, Backend is disabled. Let's enable it in the EnvoyGateway startup configuration.

  • The default installation of Envoy Gateway installs a default EnvoyGateway configuration and attaches it using a ConfigMap. In the next step, we will update this resource to enable Backend.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    extensionApis:
      enableBackend: true
EOF

After updating the ConfigMap, wait for the configuration to take effect.
You can force the configuration to be reloaded by restarting the envoy-gateway deployment.

kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system

Testing

Route to External Backend

  • In the following example, we will create a Backend resource that routes to httpbin.org:80 and an HTTPRoute that references this backend.
cat <<EOF | kubectl apply -f -
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        - group: gateway.envoyproxy.io
          kind: Backend
          name: httpbin
      matches:
        - path:
            type: PathPrefix
            value: /
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: httpbin
  namespace: default
spec:
  endpoints:
    - fqdn:
        hostname: httpbin.org
        port: 80
EOF

Get the Gateway address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Send a request and view the response:

curl -I -HHost:www.example.com http://${GATEWAY_HOST}/headers

2.2 - Circuit Breakers

Envoy circuit breakers can be used to fail quickly and apply back-pressure in response to upstream service degradation.

Envoy Gateway supports the following circuit breaker thresholds:

  • Concurrent Connections: limit the connections that Envoy can establish to the upstream service. When this threshold is met, new connections will not be established, and some requests will be queued until an existing connection becomes available.
  • Concurrent Requests: limit the number of concurrent requests in-flight from Envoy to the upstream service. When this threshold is met, requests will be queued.
  • Pending Requests: limit the pending request queue size. When this threshold is met, overflowing requests will be terminated with a 503 status code.

Envoy’s circuit breakers are distributed: counters are not synchronized across different Envoy processes. The default Envoy and Envoy Gateway circuit breaker threshold values (1024) may be too strict for high-throughput systems.

Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired circuit breaker thresholds. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Note: There are distinct circuit breaker counters for each BackendReference in an xRoute rule. Even if a BackendTrafficPolicy targets a Gateway, each BackendReference in that gateway still has a separate circuit breaker counter.

Prerequisites

Install Envoy Gateway

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Install the hey load testing tool

  • The hey CLI will be used to generate load and measure response times. Follow the installation instruction from the Hey project docs.

Test and customize circuit breaker settings

This example will simulate a degraded backend that takes 10 seconds to respond by adding the ?delay=10s query parameter to API calls. The hey tool will be used to generate 100 concurrent requests.

hey -n 100 -c 100 -host "www.example.com"  http://${GATEWAY_HOST}/?delay=10s
Summary:
  Total:	10.3426 secs
  Slowest:	10.3420 secs
  Fastest:	10.0664 secs
  Average:	10.2145 secs
  Requests/sec:	9.6687

  Total data:	36600 bytes
  Size/request:	366 bytes

Response time histogram:
  10.066 [1]	|■■■■
  10.094 [4]	|■■■■■■■■■■■■■■■
  10.122 [9]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.149 [10]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.177 [10]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.204 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.232 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.259 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.287 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.314 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.342 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■

The default circuit breaker threshold (1024) is not met. As a result, requests do not overflow: all requests are proxied upstream and both Envoy and clients wait for 10s.

In order to fail fast, apply a BackendTrafficPolicy that limits concurrent requests to 10 and pending requests to 0.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: circuitbreaker-for-route
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: backend
  circuitBreaker:
    maxPendingRequests: 0
    maxParallelRequests: 10
EOF

Execute the load simulation again.

hey -n 100 -c 100 -host "www.example.com"  http://${GATEWAY_HOST}/?delay=10s
Summary:
  Total:	10.1230 secs
  Slowest:	10.1224 secs
  Fastest:	0.0529 secs
  Average:	1.0677 secs
  Requests/sec:	9.8785

  Total data:	10940 bytes
  Size/request:	109 bytes

Response time histogram:
  0.053 [1]	|
  1.060 [89]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  2.067 [0]	|
  3.074 [0]	|
  4.081 [0]	|
  5.088 [0]	|
  6.095 [0]	|
  7.102 [0]	|
  8.109 [0]	|
  9.115 [0]	|
  10.122 [10]	|■■■■

With the new circuit breaker settings, and due to the slowness of the backend, only the first 10 concurrent requests were proxied, while the other 90 overflowed.

  • Overflowing requests failed fast, reducing proxy resource consumption.
  • Upstream traffic was limited, alleviating the pressure on the degraded service.
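
To observe the overflow directly, you can grep Envoy's cluster stats through the admin interface. A sketch, assuming the Envoy deployment carries the same owning-gateway labels used for the service in the quickstart:

export ENVOY_DEPLOYMENT=$(kubectl get deploy -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward deploy/${ENVOY_DEPLOYMENT} -n envoy-gateway-system 19000:19000 &

# upstream_rq_pending_overflow counts requests rejected by the circuit breaker.
curl -s http://localhost:19000/stats | grep -E 'upstream_rq_pending_overflow|upstream_cx_overflow'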

2.3 - Client Traffic Policy

This task explains the usage of the ClientTrafficPolicy API.

Introduction

The ClientTrafficPolicy API allows system administrators to configure how the Envoy Proxy server behaves with downstream clients.

Motivation

This API was added as a new policy attachment resource that can be applied to Gateway resources. It is meant to hold settings for configuring the behavior of the connection between the downstream client and the Envoy Proxy listener. It is the counterpart to the BackendTrafficPolicy API resource.

Quickstart

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Support TCP keepalive for downstream client

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-tcp-keepalive-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  tcpKeepalive:
    idleTime: 20m
    interval: 60s
    probes: 3
EOF

Verify that ClientTrafficPolicy is Accepted:

kubectl get clienttrafficpolicies.gateway.envoyproxy.io -n default

You should see the policy marked as accepted like this:

NAME                          STATUS     AGE
enable-tcp-keepalive-policy   Accepted   5s

Curl the example app through Envoy proxy once again:

curl --verbose  --header "Host: www.example.com" http://$GATEWAY_HOST/get --next --header "Host: www.example.com" http://$GATEWAY_HOST/headers

You should see the output like this:

*   Trying 172.18.255.202:80...
* Connected to 172.18.255.202 (172.18.255.202) port 80 (#0)
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Fri, 01 Dec 2023 10:17:04 GMT
< content-length: 507
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/get",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.1.2"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "172.18.0.2"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "4d0d33e8-d611-41f0-9da0-6458eec20fa5"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-58d58f745-2zwvn"
* Connection #0 to host 172.18.255.202 left intact
}* Found bundle for host: 0x7fb9f5204ea0 [serially]
* Can not multiplex, even if we wanted to
* Re-using existing connection #0 with host 172.18.255.202
> GET /headers HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Fri, 01 Dec 2023 10:17:04 GMT
< content-length: 511
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/headers",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.1.2"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "172.18.0.2"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "9a8874c0-c117-481c-9b04-933571732ca5"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-58d58f745-2zwvn"
* Connection #0 to host 172.18.255.202 left intact
}

You can see the connection being kept alive and reused in the output:

* Connection #0 to host 172.18.255.202 left intact
* Re-using existing connection #0 with host 172.18.255.202

Enable Proxy Protocol for downstream client

This example configures Proxy Protocol for downstream clients.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-proxy-protocol-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  enableProxyProtocol: true
EOF

Verify that ClientTrafficPolicy is Accepted:

kubectl get clienttrafficpolicies.gateway.envoyproxy.io -n default

You should see the policy marked as accepted like this:

NAME                          STATUS     AGE
enable-proxy-protocol-policy   Accepted   5s

Try the endpoint without using PROXY protocol with curl:

curl -v --header "Host: www.example.com" http://$GATEWAY_HOST/get
*   Trying 172.18.255.202:80...
* Connected to 172.18.255.202 (172.18.255.202) port 80 (#0)
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

Curl the example app through Envoy proxy once again, now sending the HAProxy PROXY protocol header at the beginning of the connection with the --haproxy-protocol flag:

curl --verbose --haproxy-protocol --header "Host: www.example.com" http://$GATEWAY_HOST/get

You should now expect a 200 response status and also see that the source IP was preserved in the X-Forwarded-For header.

*   Trying 172.18.255.202:80...
* Connected to 172.18.255.202 (172.18.255.202) port 80 (#0)
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Mon, 04 Dec 2023 21:11:43 GMT
< content-length: 510
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/get",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.1.2"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "192.168.255.6"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "290e4b61-44b7-4e5c-a39c-0ec76784e897"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-58d58f745-2zwvn"
* Connection #0 to host 172.18.255.202 left intact
}

Configure Client IP Detection

This example configures the number of additional ingress proxy hops from the right side of XFF HTTP headers to trust when determining the origin client’s IP address and determines whether or not x-forwarded-proto headers will be trusted. Refer to https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-for for details.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: http-client-ip-detection
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  clientIPDetection:
    xForwardedFor:
      numTrustedHops: 2
EOF

Verify that ClientTrafficPolicy is Accepted:

kubectl get clienttrafficpolicies.gateway.envoyproxy.io -n default

You should see the policy marked as accepted like this:

NAME                          STATUS     AGE
http-client-ip-detection   Accepted   5s

Get the name of the Envoy deployment (it carries the same owning-gateway labels as the service from the quickstart) and open a port-forward to the admin interface port:

export ENVOY_DEPLOYMENT=$(kubectl get deploy -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward deploy/${ENVOY_DEPLOYMENT} -n envoy-gateway-system 19000:19000

Curl the admin interface port to fetch the configured value for xff_num_trusted_hops:

curl -s 'http://localhost:19000/config_dump?resource=dynamic_listeners' \
  | jq -r '.configs[0].active_state.listener.default_filter_chain.filters[0].typed_config 
      | with_entries(select(.key | match("xff|remote_address|original_ip")))'

You should expect to see the following:

{
  "use_remote_address": true,
  "xff_num_trusted_hops": 2
}

Curl the example app through Envoy proxy:

curl -v http://$GATEWAY_HOST/get \
  -H "Host: www.example.com" \
  -H "X-Forwarded-Proto: https" \
  -H "X-Forwarded-For: 1.1.1.1,2.2.2.2"

You should expect a 200 response status, and see that X-Forwarded-Proto was preserved and X-Envoy-External-Address was set to the leftmost address in the X-Forwarded-For header:

*   Trying [::1]:8888...
* Connected to localhost (::1) port 8888
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
> X-Forwarded-Proto: https
> X-Forwarded-For: 1.1.1.1,2.2.2.2
> 
Handling connection for 8888
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Tue, 30 Jan 2024 15:19:22 GMT
< content-length: 535
< x-envoy-upstream-service-time: 0
< server: envoy
< 
{
 "path": "/get",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.4.0"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-External-Address": [
   "1.1.1.1"
  ],
  "X-Forwarded-For": [
   "1.1.1.1,2.2.2.2,10.244.0.9"
  ],
  "X-Forwarded-Proto": [
   "https"
  ],
  "X-Request-Id": [
   "53ccfad7-1899-40fa-9322-ddb833aa1ac3"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-58d58f745-8psnc"
* Connection #0 to host localhost left intact
}

Enable HTTP Request Received Timeout

This feature allows you to limit the time taken by the Envoy Proxy fleet to receive the entire request from the client, which is useful in preventing certain clients from consuming too much memory in Envoy. This example configures the HTTP request timeout for the client; please check out the details here.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: client-timeout
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  timeout:
    http:
      requestReceivedTimeout: 2s
EOF

Curl the example app through Envoy proxy:

curl -v http://$GATEWAY_HOST/get \
  -H "Host: www.example.com" \
  -H "Content-Length: 10000"

You should expect a 408 response status after 2s:

curl -v http://$GATEWAY_HOST/get \
  -H "Host: www.example.com" \
  -H "Content-Length: 10000"
*   Trying 172.18.255.200:80...
* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
> Content-Length: 10000
>
< HTTP/1.1 408 Request Timeout
< content-length: 15
< content-type: text/plain
< date: Tue, 27 Feb 2024 07:38:27 GMT
< connection: close
<
* Closing connection
request timeout

Configure Client HTTP Idle Timeout

The idle timeout is defined as the period in which there are no active requests. When the idle timeout is reached the connection will be closed. For more details see here.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: client-timeout
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  timeout:
    http:
      idleTimeout: 5s
EOF

Open a connection to the Gateway and leave it idle:

openssl s_client -crlf -connect $GATEWAY_HOST:443

You should expect the connection to be closed after 5s.

You can also check the number of connections closed due to idle timeout by using the following query:

envoy_http_downstream_cx_idle_timeout{envoy_http_conn_manager_prefix="<name of connection manager>"} 

The number of connections closed due to idle timeout should increase by 1.
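
A sketch of fetching that counter through the Envoy admin interface, reusing the ENVOY_DEPLOYMENT variable set earlier in this task:

kubectl port-forward deploy/${ENVOY_DEPLOYMENT} -n envoy-gateway-system 19000:19000 &

# The counter increments each time a downstream connection is closed for idleness.
curl -s http://localhost:19000/stats/prometheus | grep envoy_http_downstream_cx_idle_timeout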

Configure Downstream Per Connection Buffer Limit

This feature allows you to set a soft limit on the size of the listener's new connection read and write buffers. The size is configured using the resource.Quantity format, see examples here.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: client-timeout
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  connection:
    bufferLimit: 1024
EOF

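To verify the limit was applied, a sketch mirroring the config_dump query used earlier for client IP detection; per_connection_buffer_limit_bytes is the Envoy listener field that bufferLimit is assumed to map to:

# Expect the configured value, e.g. "1024".
curl -s 'http://localhost:19000/config_dump?resource=dynamic_listeners' \
  | jq -r '.configs[0].active_state.listener.per_connection_buffer_limit_bytes'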

2.4 - Connection Limit

The connection limit feature allows users to limit the number of concurrently active TCP connections on a Gateway or a Listener. When the connection limit is reached, new connections are closed immediately by Envoy proxy. It's possible to configure a delay for connection rejection.

Users may want to limit the number of connections for several reasons:

  • Protect resources like CPU and Memory.
  • Ensure that different listeners can receive a fair share of global resources.
  • Protect from malicious activity like DoS attacks.

Envoy Gateway introduces a new CRD called Client Traffic Policy that allows the user to describe their desired connection limit settings. This instantiated resource can be linked to a Gateway.

The Envoy connection limit implementation is distributed: counters are not synchronized between different envoy proxies.

When a Client Traffic Policy is attached to a gateway, the connection limit will apply differently based on the Listener protocol in use:

  • HTTP: all HTTP listeners in a Gateway will share a common connection counter, and a limit defined by the policy.
  • HTTPS/TLS: each HTTPS/TLS listener will have a dedicated connection counter, and a limit defined by the policy.

Prerequisites

Install Envoy Gateway

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Install the hey load testing tool

  • The hey CLI will be used to generate load and measure response times. Follow the installation instruction from the Hey project docs.

Test and customize connection limit settings

In this example, we use hey to open 10 connections, each executing 1 RPS, for 10 seconds.

hey -c 10 -q 1 -z 10s  -host "www.example.com" http://${GATEWAY_HOST}/get
Summary:
  Total:	10.0058 secs
  Slowest:	0.0275 secs
  Fastest:	0.0029 secs
  Average:	0.0111 secs
  Requests/sec:	9.9942

[...]

Status code distribution:
  [200]	100 responses

There are no connection limits, and so all 100 requests succeed.

Next, we apply a limit of 5 connections.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: connection-limit-ctp
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  connection:
    connectionLimit:
      value: 5    
EOF

Execute the load simulation again.

hey -c 10 -q 1 -z 10s  -host "www.example.com" http://${GATEWAY_HOST}/get
Summary:
  Total:	11.0327 secs
  Slowest:	0.0361 secs
  Fastest:	0.0013 secs
  Average:	0.0088 secs
  Requests/sec:	9.0640

[...] 

Status code distribution:
  [200]	50 responses

Error distribution:
  [50]	Get "http://localhost:8888/get": EOF

With the new connection limit, only 5 of 10 connections are established, and so only 50 requests succeed.
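
To confirm the rejections on the Envoy side, a sketch that greps the connection_limit filter stats via the admin interface (same port-forward pattern as the Client Traffic Policy task):

export ENVOY_DEPLOYMENT=$(kubectl get deploy -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward deploy/${ENVOY_DEPLOYMENT} -n envoy-gateway-system 19000:19000 &

# limited_connections counts connections closed because the limit was reached.
curl -s http://localhost:19000/stats | grep connection_limit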

2.5 - Direct Response

Direct responses are valuable in cases where you want the gateway itself to handle certain requests without forwarding them to backend services. This task shows you how to configure them.

Installation

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Testing Direct Response

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: direct-response
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"    
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /inline
    filters:
    - type: ExtensionRef
      extensionRef:
        group: gateway.envoyproxy.io
        kind: HTTPRouteFilter
        name: direct-response-inline
  - matches:
    - path:
        type: PathPrefix
        value: /value-ref
    filters:
    - type: ExtensionRef
      extensionRef:
        group: gateway.envoyproxy.io
        kind: HTTPRouteFilter
        name: direct-response-value-ref
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: value-ref-response
data:
  response.body: '{"error": "Internal Server Error"}'
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: HTTPRouteFilter
metadata:
  name: direct-response-inline
spec:
  directResponse:
    contentType: text/plain
    statusCode: 503
    body:
      type: Inline
      inline: "Oops! Your request is not found."
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: HTTPRouteFilter
metadata:
  name: direct-response-value-ref
spec:
  directResponse:
    contentType: application/json
    statusCode: 500
    body:
      type: ValueRef
      valueRef:
        group: ""
        kind: ConfigMap
        name: value-ref-response
EOF

Send a request to the /inline path:

curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/inline
*   Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80
> GET /inline HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
> 
< HTTP/1.1 503 Service Unavailable
< content-type: text/plain
< content-length: 32
< date: Sat, 02 Nov 2024 00:35:48 GMT
< 
* Connection #0 to host 127.0.0.1 left intact
Oops! Your request is not found.

Send a request to the /value-ref path:

curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/value-ref
*   Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80
> GET /value-ref HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
> 
< HTTP/1.1 500 Internal Server Error
< content-type: application/json
< content-length: 34
< date: Sat, 02 Nov 2024 00:35:55 GMT
< 
* Connection #0 to host 127.0.0.1 left intact
{"error": "Internal Server Error"}

2.6 - Failover

Active-passive failover in an API gateway setup is like having a backup plan in place to keep things running smoothly if something goes wrong. Here’s why it’s valuable:

  • Staying Online: When the main (or “active”) backend has issues or goes offline, the fallback (or “passive”) backend is ready to step in instantly. This helps keep your API accessible and your services running, so users don’t even notice any interruptions.

  • Automatic Switch Over: If a problem occurs, the system can automatically switch traffic over to the fallback backend. This avoids needing someone to jump in and fix things manually, which could take time and might even lead to mistakes.

  • Lower Costs: In an active-passive setup, the fallback backend doesn't need to work all the time; it's just on standby. This can save on costs (like cloud egress costs) compared to setups where both backends are running at full capacity.

  • Peace of Mind with Redundancy: Although the fallback backend isn’t handling traffic daily, it’s there as a safety net. If something happens with the primary backend, the backup can take over immediately, ensuring your service doesn’t skip a beat.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Test

  • We’ll first create two services & deployments, called active and passive, representing an active and passive backend application.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: active 
  labels:
    app: active
    service: active
spec:
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: active
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: active
spec:
  replicas: 1
  selector:
    matchLabels:
      app: active
      version: v1
  template:
    metadata:
      labels:
        app: active
        version: v1
    spec:
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: active 
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  name: passive 
  labels:
    app: passive
    service: passive
spec:
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: passive
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: passive
spec:
  replicas: 1
  selector:
    matchLabels:
      app: passive
      version: v1
  template:
    metadata:
      labels:
        app: passive
        version: v1
    spec:
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: passive 
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
EOF

  • Follow the instructions here to enable the Backend API

  • Create two Backend resources that are used to represent the active and passive backends. Note, we've set fallback: true for the passive backend to indicate it's the passive backend.

cat <<EOF | kubectl apply -f -
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: passive 
spec:
  fallback: true
  endpoints:
    - fqdn:
        hostname: passive.default.svc.cluster.local
        port: 3000 
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: active
spec:
  endpoints:
  - fqdn:
      hostname: active.default.svc.cluster.local 
      port: 3000
---
EOF

  • Let's create an HTTPRoute that can route to both these backends
cat <<EOF | kubectl apply -f -
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: ha-example
  namespace: default
spec:
  hostnames:
  - www.example.com
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg 
    namespace: default
  rules:
  - backendRefs:
    - group: gateway.envoyproxy.io
      kind: Backend
      name: active
      namespace: default
      port: 3000
    - group: gateway.envoyproxy.io
      kind: Backend
      name: passive 
      namespace: default
      port: 3000
    matches:
    - path:
        type: PathPrefix
        value: /test
EOF

  • Let's configure a BackendTrafficPolicy with a passive health check setting to detect transient errors.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: passive-health-check
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: ha-example 
  healthCheck:
    passive:
      baseEjectionTime: 10s
      interval: 2s
      maxEjectionPercent: 100
      consecutive5XxErrors: 1 
      consecutiveGatewayErrors: 0
      consecutiveLocalOriginFailures: 1
      splitExternalLocalOriginErrors: false
EOF

  • Let's send 10 requests. You should see that they all go to the active backend.
for i in {1..10}; do curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/test 2>/dev/null | jq .pod; done
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
  • Let's simulate a failure in the active backend by changing the server listening port to 5000
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: active
spec:
  replicas: 1
  selector:
    matchLabels:
      app: active
      version: v1
  template:
    metadata:
      labels:
        app: active
        version: v1
    spec:
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: active 
          ports:
            - containerPort: 3000
          env:
            - name: HTTP_PORT
              value: "5000"
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
EOF

  • Let's send 10 requests again. You should see them all being sent to the passive backend.
for i in {1..10}; do curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/test 2>/dev/null | jq .pod; done
parse error: Invalid numeric literal at line 1, column 9
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"

The first error can be avoided by configuring retries.
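
A sketch of such a retry policy, assuming the retry API of BackendTrafficPolicy (numRetries plus retryOn triggers and status codes); the policy name is illustrative:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: retry-for-route        # illustrative name
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: ha-example
  retry:
    numRetries: 3
    retryOn:
      triggers:
        - connect-failure
        - retriable-status-codes
      httpStatusCodes:
        - 503
EOF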

2.7 - Fault Injection

Envoy fault injection can be used to inject delays and abort requests to mimic failure scenarios such as service failures and overloads.

Envoy Gateway supports the following fault scenarios:

  • delay fault: inject a custom fixed delay into the request with a certain probability to simulate delay failures.
  • abort fault: inject a custom response code into the response with a certain probability to simulate abort failures.

Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired fault scenarios. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

For GRPC - follow the steps from the GRPC Routing example.

Install the hey load testing tool

  • The hey CLI will be used to generate load and measure response times. Follow the installation instruction from the Hey project docs.

Configuration

Configure fault injection by creating a BackendTrafficPolicy and attaching it to the example HTTPRoute or GRPCRoute.

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: fault-injection-50-percent-abort
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: foo
  faultInjection:
    abort:
      httpStatus: 501
      percentage: 50
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: fault-injection-delay
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: bar
  faultInjection:
    delay:
      fixedDelay: 2s
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: foo
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /foo
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bar
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /bar
EOF

Two HTTPRoute resources were created, one for /foo and another for /bar. The fault-injection-50-percent-abort BackendTrafficPolicy targets the foo HTTPRoute to abort 50% of requests for /foo. The fault-injection-delay BackendTrafficPolicy targets the bar HTTPRoute to delay requests for /bar by 2s.

Verify the HTTPRoute configuration and status:

kubectl get httproute/foo -o yaml
kubectl get httproute/bar -o yaml

Verify the BackendTrafficPolicy configuration:

kubectl get backendtrafficpolicy/fault-injection-50-percent-abort -o yaml
kubectl get backendtrafficpolicy/fault-injection-delay -o yaml

GRPCRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: fault-injection-abort
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: GRPCRoute
    name: yages
  faultInjection:
    abort:
      grpcStatus: 14
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: yages
  labels:
    example: grpc-routing
spec:
  parentRefs:
  - name: example-gateway
  hostnames:
  - "grpc-example.com"
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: yages
      port: 9000
      weight: 1
EOF

A BackendTrafficPolicy has been created targeting the yages GRPCRoute to abort requests for the yages service.

Verify the GRPCRoute configuration and status:

kubectl get grpcroute/yages -o yaml

Verify the BackendTrafficPolicy configuration:

kubectl get backendtrafficpolicy/fault-injection-abort -o yaml

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

HTTPRoute

Verify that approximately 50% of requests to the foo route are aborted.

hey -n 1000 -c 100 -host "www.example.com"  http://${GATEWAY_HOST}/foo
Status code distribution:
  [200]	501 responses
  [501]	499 responses

Verify that requests to the bar route are delayed.

hey -n 1000 -c 100 -host "www.example.com"  http://${GATEWAY_HOST}/bar
Summary:
  Total:	20.1493 secs
  Slowest:	2.1020 secs
  Fastest:	1.9940 secs
  Average:	2.0123 secs
  Requests/sec:	49.6295

  Total data:	557000 bytes
  Size/request:	557 bytes

Response time histogram:
  1.994 [1]	|
  2.005 [475]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  2.016 [419]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  2.026 [5]	|
  2.037 [0]	|
  2.048 [0]	|
  2.059 [30]	|■■■
  2.070 [0]	|
  2.080 [0]	|
  2.091 [11]	|■
  2.102 [59]	|■■■■■

GRPCRoute

Verify that requests to the yages service are aborted.

grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping

You should see the following response:

Error invoking method "yages.Echo/Ping": rpc error: code = Unavailable desc = failed to query for service descriptor "yages.Echo": fault filter abort

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the BackendTrafficPolicies:

kubectl delete backendtrafficpolicy/fault-injection-50-percent-abort backendtrafficpolicy/fault-injection-delay backendtrafficpolicy/fault-injection-abort

2.8 - Gateway Address

The Gateway API provides an optional Addresses field through which Envoy Gateway can set addresses for the Envoy Proxy Service. Depending on the Service Type, the addresses of the Gateway can be used as:

  • External IPs
  • Cluster IP

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

External IPs

To use the addresses in Gateway.Spec.Addresses as the External IPs of the Envoy Proxy Service, the address must be of type IPAddress and the ServiceType must be LoadBalancer or NodePort.

Envoy Gateway deploys the Envoy Proxy Service as a LoadBalancer by default, so you can set the address of the Gateway directly (the address settings here are for reference only):

kubectl patch gateway eg --type=json --patch '
- op: add
  path: /spec/addresses
  value:
   - type: IPAddress
     value: 1.2.3.4
'

Verify the Gateway status:

kubectl get gateway
NAME   CLASS   ADDRESS   PROGRAMMED   AGE
eg     eg      1.2.3.4   True         14m

Verify the Envoy Proxy Service status:

kubectl get service -n envoy-gateway-system
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
envoy-default-eg-64656661       LoadBalancer   10.96.236.219   1.2.3.4       80:31017/TCP   15m
envoy-gateway                   ClusterIP      10.96.192.76    <none>        18000/TCP      15m
envoy-gateway-metrics-service   ClusterIP      10.96.124.73    <none>        8443/TCP       15m

Note: If Gateway.Spec.Addresses is explicitly set, they will be the only addresses that populate the Gateway status.

Cluster IP

To use the addresses in Gateway.Spec.Addresses as the Cluster IP of the Envoy Proxy Service, the address must be of type IPAddress and the ServiceType must be ClusterIP.
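
The patch mirrors the External IPs example. A sketch, assuming the ServiceType has already been switched to ClusterIP through an attached EnvoyProxy resource, and that the address is an unallocated IP from the cluster's service CIDR:

kubectl patch gateway eg --type=json --patch '
- op: add
  path: /spec/addresses
  value:
   - type: IPAddress
     value: 10.96.154.24
'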

2.9 - Gateway API Support

As mentioned in the system design document, Envoy Gateway’s managed data plane is configured dynamically through Kubernetes resources, primarily Gateway API objects. Envoy Gateway supports configuration using the following Gateway API resources.

GatewayClass

A GatewayClass represents a “class” of gateways, i.e. which Gateways should be managed by Envoy Gateway. Envoy Gateway supports managing a single GatewayClass resource that matches its configured controllerName and follows Gateway API guidelines for resolving conflicts when multiple GatewayClasses exist with a matching controllerName.

Note: If the GatewayClass parameters reference is specified, it must refer to an EnvoyProxy resource.
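
For example, a GatewayClass managed by Envoy Gateway, with an optional parametersRef pointing at an EnvoyProxy resource (the EnvoyProxy name here is illustrative):

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy-config   # illustrative EnvoyProxy resource
    namespace: envoy-gateway-system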

Gateway

When a Gateway resource is created that references the managed GatewayClass, Envoy Gateway will create and manage a new Envoy Proxy deployment. Gateway API resources that reference this Gateway will configure this managed Envoy Proxy deployment.

HTTPRoute

An HTTPRoute configures routing of HTTP traffic through one or more Gateways. The following HTTPRoute filters are supported by Envoy Gateway:

  • requestHeaderModifier: RequestHeaderModifiers can be used to modify or add request headers before the request is proxied to its destination.
  • responseHeaderModifier: ResponseHeaderModifiers can be used to modify or add response headers before the response is sent back to the client.
  • requestMirror: RequestMirrors configure destinations where the requests should also be mirrored to. Responses to mirrored requests will be ignored.
  • requestRedirect: RequestRedirects configure policies for how requests that match the HTTPRoute should be modified and then redirected.
  • urlRewrite: UrlRewrites allow for modification of the request’s hostname and path before it is proxied to its destination.
  • extensionRef: ExtensionRefs are used by Envoy Gateway to implement extended filters. Currently, Envoy Gateway supports rate limiting and request authentication filters. For more information about these filters, refer to the rate limiting and request authentication documentation.

Notes:

  • The only BackendRef kind supported by Envoy Gateway is a Service. Routing traffic to other destinations such as arbitrary URLs is not possible.
  • Only requestHeaderModifier and responseHeaderModifier filters are currently supported within HTTPBackendRef.

TCPRoute

A TCPRoute configures routing of raw TCP traffic through one or more Gateways. Traffic can be forwarded to the desired BackendRefs based on a TCP port number.

Note: A TCPRoute only supports proxying in non-transparent mode, i.e. the backend will see the source IP and port of the Envoy Proxy instance instead of the client.
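
For illustration, here is a minimal TCPRoute sketch; the listener name tcp and the backend Service are assumptions, not part of the quickstart setup. A UDPRoute is defined analogously against a UDP listener.

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: tcp-app
spec:
  parentRefs:
  - name: eg
    # Assumes the Gateway defines a TCP listener named "tcp".
    sectionName: tcp
  rules:
  - backendRefs:
    - name: backend
      port: 3000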

UDPRoute

A UDPRoute configures routing of raw UDP traffic through one or more Gateways. Traffic can be forwarded to the desired BackendRefs based on a UDP port number.

Note: Similar to TCPRoutes, UDPRoutes only support proxying in non-transparent mode i.e. the backend will see the source IP and port of the Envoy Proxy instance instead of the client.

GRPCRoute

A GRPCRoute configures routing of gRPC requests through one or more Gateways. They offer request matching by hostname, gRPC service, gRPC method, or HTTP/2 Header. Envoy Gateway supports the following filters on GRPCRoutes to provide additional traffic processing:

  • requestHeaderModifier: RequestHeaderModifiers can be used to modify or add request headers before the request is proxied to its destination.
  • responseHeaderModifier: ResponseHeaderModifiers can be used to modify or add response headers before the response is sent back to the client.
  • requestMirror: RequestMirrors configure destinations where the requests should also be mirrored to. Responses to mirrored requests will be ignored.

Notes:

  • The only BackendRef kind supported by Envoy Gateway is a Service. Routing traffic to other destinations such as arbitrary URLs is not currently possible.
  • Only requestHeaderModifier and responseHeaderModifier filters are currently supported within GRPCBackendRef.

TLSRoute

A TLSRoute configures routing of TCP traffic through one or more Gateways. However, unlike TCPRoutes, TLSRoutes can match against TLS-specific metadata.

ReferenceGrant

A ReferenceGrant is used to allow a resource to reference another resource in a different namespace. Normally an HTTPRoute created in namespace foo is not allowed to reference a Service in namespace bar. A ReferenceGrant permits these types of cross-namespace references. Envoy Gateway supports the following ReferenceGrant use-cases:

  • Allowing an HTTPRoute, GRPCRoute, TLSRoute, UDPRoute, or TCPRoute to reference a Service in a different namespace.
  • Allowing an HTTPRoute’s requestMirror filter to include a BackendRef that references a Service in a different namespace.
  • Allowing a Gateway’s SecretObjectReference to reference a secret in a different namespace.
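
For example, here is a sketch of a ReferenceGrant that lets HTTPRoutes in namespace foo reference Services in namespace bar; note that it is created in the namespace that owns the referenced Services:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-httproutes-from-foo
  # A ReferenceGrant lives in the target namespace, i.e. where the Services are.
  namespace: bar
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: foo
  to:
  - group: ""
    kind: Service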

2.10 - Global Rate Limit

Rate limiting is a feature that allows the user to limit the number of incoming requests to a predefined value based on attributes within the traffic flow.

Here are some reasons why you may want to implement rate limits:

  • To prevent malicious activity such as DDoS attacks.
  • To prevent applications and their resources (such as a database) from getting overloaded.
  • To create API limits based on user entitlements.

Envoy Gateway supports two types of rate limiting: Global rate limiting and Local rate limiting.

Global rate limiting applies a shared rate limit to the traffic flowing through all the instances of Envoy proxies where it is configured. i.e. if the data plane has 2 replicas of Envoy running, and the rate limit is 10 requests/second, this limit is shared and will be hit if 5 requests pass through the first replica and 5 requests pass through the second replica within the same second.

Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their rate limit intent. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Note: Limit is applied per route. Even if a BackendTrafficPolicy targets a gateway, each route in that gateway still has a separate rate limit bucket. For example, if a gateway has 2 routes, and the limit is 100r/s, then each route has its own 100r/s rate limit bucket.

Prerequisites

Install Envoy Gateway

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Install Redis

  • The global rate limit feature is based on Envoy Ratelimit, which requires a Redis instance as its caching layer. Let's install a Redis deployment in the redis-system namespace.
cat <<EOF | kubectl apply -f -
kind: Namespace
apiVersion: v1
metadata:
  name: redis-system 
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: redis-system
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis:6.0.6
        imagePullPolicy: IfNotPresent
        name: redis
        resources:
          limits:
            cpu: 1500m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: redis-system 
  labels:
    app: redis
  annotations:
spec:
  ports:
  - name: redis
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis
EOF

Enable Global Rate limit in Envoy Gateway

  • The default installation of Envoy Gateway installs a default EnvoyGateway configuration and attaches it using a ConfigMap. In the next step, we will update this resource to enable rate limit in Envoy Gateway as well as configure the URL for the Redis instance used for Global rate limiting.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    rateLimit:
      backend:
        type: Redis
        redis:
          url: redis.redis-system.svc.cluster.local:6379
EOF

After updating the ConfigMap, you will need to wait for the new configuration to take effect.
You can force the configuration to be reloaded by restarting the envoy-gateway deployment.

kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
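
Once rate limiting is enabled, Envoy Gateway provisions a rate limit service alongside the controller; you can check that its deployment is running (envoy-ratelimit is the deployment name used by current releases, but verify against your installation):

kubectl get deployment envoy-ratelimit -n envoy-gateway-system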

Rate Limit Specific User

Here is an example of a rate limit implemented by the application developer to limit a specific user by matching on a custom x-user-id header with a value set to one.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Global
    global:
      rules:
      - clientSelectors:
        - headers:
          - name: x-user-id
            value: one
        limit:
          requests: 3
          unit: Hour
EOF

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-ratelimit -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Let’s query ratelimit.example/get 4 times. We should receive a 200 response from the example Gateway for the first 3 requests and then receive a 429 status code for the 4th request since the limit is set at 3 requests/Hour for the request which contains the header x-user-id and value one.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

You should be able to send requests with the x-user-id header and a different value and receive successful responses from the server.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: two" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:36 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:37 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:38 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:39 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

Rate Limit Distinct Users Except Admin

Here is an example of a rate limit implemented by the application developer to limit distinct users who can be differentiated based on the value in the x-user-id header. Here, user one (recognised from the traffic flow using the header x-user-id and value one) will be rate limited at 3 requests/hour and so will user two (recognised from the traffic flow using the header x-user-id and value two). But if x-user-id is admin, it will not be rate limited even beyond 3 requests/hour.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Global
    global:
      rules:
      - clientSelectors:
        - headers:
          - type: Distinct
            name: x-user-id
          - name: x-user-id
            value: admin
            invert: true
        limit:
          requests: 3
          unit: Hour
EOF

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

Let's run the same command again with the header x-user-id set to one. The first 3 requests should succeed and the 4th request should be rate limited.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

You should see the same behavior when the value for header x-user-id is set to two and 4 requests are sent.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: two" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

But when the value for header x-user-id is set to admin and 4 requests are sent, all 4 of them should respond with 200 OK.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: admin" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

Rate Limit All Requests

This example shows you how to rate limit all requests matching the HTTPRoute rule at 3 requests/Hour by leaving the clientSelectors field unset.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Global
    global:
      rules:
      - limit:
          requests: 3
          unit: Hour
EOF

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

Send 4 requests; the first 3 should succeed and the 4th should be rate limited:

for i in {1..4}; do curl -I --header "Host: ratelimit.example" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

Rate Limit Client IP Addresses

Here is an example of a rate limit implemented by the application developer to limit distinct users who can be differentiated based on their IP address (also reflected in the X-Forwarded-For header).

Note: Envoy Gateway supports two kinds of rate limiting for IP addresses: Exact and Distinct.

  • Exact means that all IP addresses within the specified Source IP CIDR share the same rate limit bucket.
  • Distinct means that each IP address within the specified Source IP CIDR has its own rate limit bucket.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Global
    global:
      rules:
      - clientSelectors:
        - sourceCIDR: 
            value: 0.0.0.0/0
            type: Distinct
        limit:
          requests: 3
          unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

Send 4 requests; since all of them originate from the same client IP, the first 3 should succeed and the 4th should be rate limited:

for i in {1..4}; do curl -I --header "Host: ratelimit.example" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:45 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:46 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:48 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Tue, 28 Mar 2023 08:28:48 GMT
server: envoy
transfer-encoding: chunked
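
For comparison, here is a sketch of the same clientSelector with type Exact, under which all client IPs in the CIDR share a single bucket (rule fragment only, following the BackendTrafficPolicy fields above):

rules:
- clientSelectors:
  - sourceCIDR:
      # Exact: every client in 0.0.0.0/0 shares one 3 requests/Hour bucket.
      value: 0.0.0.0/0
      type: Exact
  limit:
    requests: 3
    unit: Hour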

Rate Limit JWT Claims

Here is an example of a rate limit implemented by the application developer to limit distinct users who can be differentiated based on the value of the JWT claims carried in the request.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-example
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: example
  jwt:
    providers:
    - name: example
      remoteJWKS:
        uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
      claimToHeaders:
      - claim: name
        header: x-claim-name
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: example 
  rateLimit:
    type: Global
    global:
      rules:
      - clientSelectors:
        - headers:
          - name: x-claim-name
            value: John Doe
        limit:
          requests: 3
          unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /foo
EOF

Get the JWT used for testing request authentication:

TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode
TOKEN1=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/with-different-claim.jwt -s) && echo "$TOKEN1" | cut -d '.' -f2 - | base64 --decode
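
To see why the first token is the one that gets limited, you can extract its name claim (this assumes jq is installed); it should print "John Doe", the value matched by the BackendTrafficPolicy above:

# Decode the JWT payload and print the "name" claim.
echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode | jq .name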

Rate limit requests carrying TOKEN

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "Authorization: Bearer $TOKEN" http://${GATEWAY_HOST}/foo ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:25 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy


HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:26 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy


HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:27 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy


HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Mon, 12 Jun 2023 12:00:28 GMT
server: envoy
transfer-encoding: chunked

No rate limit for requests carrying TOKEN1

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "Authorization: Bearer $TOKEN1" http://${GATEWAY_HOST}/foo ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:34 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:35 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:36 GMT
content-length: 556
x-envoy-upstream-service-time: 1
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:37 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy

(Optional) Editing Kubernetes Resource Settings for the Rate Limit Service

  • The default installation of Envoy Gateway installs a default EnvoyGateway configuration and provides initial settings for the rate limit service's Kubernetes resources, such as replicas: 1, a CPU request of 100m, and a memory request of 512Mi. Other settings, such as the container image, securityContext, env, and pod annotations, can be changed by modifying the ConfigMap.

  • tls.certificateRef sets the client certificate for TLS connections to the Redis server.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
      kubernetes:
        rateLimitDeployment:
          replicas: 1
          container:
            image: envoyproxy/ratelimit:master
            env:
            - name: CACHE_KEY_PREFIX
              value: "eg:rl:"
            resources:
              requests:
                cpu: 100m
                memory: 512Mi
            securityContext:
              runAsUser: 2000
              allowPrivilegeEscalation: false
          pod:
            annotations:
              key1: val1
              key2: val2
            securityContext:
              runAsUser: 1000
              runAsGroup: 3000
              fsGroup: 2000
              fsGroupChangePolicy: "OnRootMismatch"
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    rateLimit:
      backend:
        type: Redis
        redis:
          url: redis.redis-system.svc.cluster.local:6379
          tls:
            certificateRef:
              name: ratelimit-cert
EOF

After updating the ConfigMap, you will need to wait for the new configuration to take effect.
You can force the configuration to be reloaded by restarting the envoy-gateway deployment.

kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system

2.11 - GRPC Routing

The GRPCRoute resource allows users to configure gRPC routing by matching HTTP/2 traffic and forwarding it to backend gRPC servers. To learn more about gRPC routing, refer to the Gateway API documentation.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Installation

Install the gRPC routing example resources:

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/grpc-routing.yaml

The manifest installs a GatewayClass, Gateway, a Deployment, a Service, and a GRPCRoute resource. The GatewayClass is a cluster-scoped resource that represents a class of Gateways that can be instantiated.

Note: Envoy Gateway is configured by default to manage a GatewayClass with controllerName: gateway.envoyproxy.io/gatewayclass-controller.

Verification

Check the status of the GatewayClass:

kubectl get gc --selector=example=grpc-routing

The status should reflect “Accepted=True”, indicating Envoy Gateway is managing the GatewayClass.

A Gateway represents configuration of infrastructure. When a Gateway is created, Envoy proxy infrastructure is provisioned or configured by Envoy Gateway. The gatewayClassName defines the name of a GatewayClass used by this Gateway. Check the status of the Gateway:

kubectl get gateways --selector=example=grpc-routing

The status should reflect “Ready=True”, indicating the Envoy proxy infrastructure has been provisioned. The status also provides the address of the Gateway. This address is used later to test connectivity to proxied backend services.

Check the status of the GRPCRoute:

kubectl get grpcroutes --selector=example=grpc-routing -o yaml

The status for the GRPCRoute should surface “Accepted=True” and a parentRef that references the example Gateway. The example-route matches any traffic for “grpc-example.com” and forwards it to the “yages” Service.

Testing the Configuration

Before testing GRPC routing to the yages backend, get the Gateway’s address.

export GATEWAY_HOST=$(kubectl get gateway/example-gateway -o jsonpath='{.status.addresses[0].value}')

Test GRPC routing to the yages backend using the grpcurl command.

grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping

You should see the following response:

{
  "text": "pong"
}

Envoy Gateway also supports gRPC-Web requests for this configuration. The curl command below can be used to send a gRPC-Web request over HTTP/2. You should receive the same response seen in the previous command.

The data in the body AAAAAAA= is a base64 encoded representation of an empty message (data length 0) that the Ping RPC accepts.
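
You can reproduce this payload yourself: an empty gRPC message is framed as five zero bytes (a 1-byte compression flag plus a 4-byte big-endian length of 0), and base64-encoding those bytes yields AAAAAAA=:

# Emit the 5-byte gRPC frame for an empty message and base64-encode it.
printf '\x00\x00\x00\x00\x00' | base64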

curl --http2-prior-knowledge -s ${GATEWAY_HOST}:80/yages.Echo/Ping -H 'Host: grpc-example.com'   -H 'Content-Type: application/grpc-web-text'   -H 'Accept: application/grpc-web-text' -XPOST -d'AAAAAAA=' | base64 -d

GRPCRoute Match

The matches field can be used to restrict the route to a specific set of requests based on GRPC’s service and/or method names. It supports two match types: Exact and RegularExpression.

Exact

Exact match is the default match type.

The following example shows how to match a request based on the service and method names for grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo, as well as a match for all services with a method name Ping which matches yages.Echo/Ping in our deployment.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: yages
  labels:
    example: grpc-routing
spec:
  parentRefs:
    - name: example-gateway
  hostnames:
    - "grpc-example.com"
  rules:
    - matches:
      - method:
          method: ServerReflectionInfo
          service: grpc.reflection.v1alpha.ServerReflection
      - method:
          method: Ping
      backendRefs:
        - group: ""
          kind: Service
          name: yages
          port: 9000
          weight: 1
EOF

Verify the GRPCRoute status:

kubectl get grpcroutes --selector=example=grpc-routing -o yaml

Test GRPC routing to the yages backend using the grpcurl command.

grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping

RegularExpression

The following example shows how to match a request based on the service and method names with match type RegularExpression. It matches all the services and methods with pattern /.*.Echo/Pin.+, which matches yages.Echo/Ping in our deployment.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: yages
  labels:
    example: grpc-routing
spec:
  parentRefs:
    - name: example-gateway
  hostnames:
    - "grpc-example.com"
  rules:
    - matches:
      - method:
          method: ServerReflectionInfo
          service: grpc.reflection.v1alpha.ServerReflection
      - method:
          method: "Pin.+"
          service: ".*.Echo"
          type: RegularExpression
      backendRefs:
        - group: ""
          kind: Service
          name: yages
          port: 9000
          weight: 1
EOF

Verify the GRPCRoute status:

kubectl get grpcroutes --selector=example=grpc-routing -o yaml

Test GRPC routing to the yages backend using the grpcurl command.

grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping

2.12 - HTTP Redirects

The HTTPRoute resource can issue redirects to clients or rewrite paths sent upstream using filters. Note that HTTPRoute rules cannot use both filter types at once. At the time of this writing, Envoy Gateway only supports the core HTTPRoute filters, which consist of RequestRedirect and RequestHeaderModifier. To learn more about HTTP routing, refer to the Gateway API documentation.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Redirects

Redirects return HTTP 3XX responses to a client, instructing it to retrieve a different resource. A RequestRedirect filter instructs Gateways to emit a redirect response to requests that match the rule. For example, to issue a permanent redirect (301) from HTTP to HTTPS, configure requestRedirect.statusCode=301 and requestRedirect.scheme="https":

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-to-https-filter-redirect
spec:
  parentRefs:
    - name: eg
  hostnames:
    - redirect.example
  rules:
    - filters:
      - type: RequestRedirect
        requestRedirect:
          scheme: https
          statusCode: 301
          hostname: www.example.com
EOF

Note: 301 (default) and 302 are the only supported statusCodes.

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-to-https-filter-redirect -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying redirect.example/get should result in a 301 response from the example Gateway with a redirect to the configured hostname.

$ curl -L -vvv --header "Host: redirect.example" "http://${GATEWAY_HOST}/get"
...
< HTTP/1.1 301 Moved Permanently
< location: https://www.example.com/get
...

If you followed the steps in the Secure Gateways task, you should be able to curl the redirect location.
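
For example, assuming the CA certificate from that task is saved locally as example.com.crt (a placeholder filename), you could curl the redirect location directly:

curl -v --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cacert example.com.crt "https://www.example.com/get"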

HTTP --> HTTPS

Listeners expose TLS settings on a per-domain or per-subdomain basis. The TLS settings of a listener are applied to all domains that satisfy the hostname criteria.

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/CN=example.com' -keyout CA.key -out CA.crt
openssl req -out example.com.csr -newkey rsa:2048 -nodes -keyout tls.key -subj "/CN=example.com"

Generate a wildcard certificate for example.com with a *.example.com subject alternative name:

cat <<EOF | openssl x509 -req -days 365 -CA CA.crt -CAkey CA.key -set_serial 0 \
-subj "/CN=example.com" \
-in example.com.csr -out tls.crt -extensions v3_req  -extfile -
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1   = example.com
DNS.2   = *.example.com
EOF

Create the Kubernetes TLS Secret:

kubectl create secret tls example-com --key=tls.key --cert=tls.crt

Define an HTTPS listener on the existing Gateway:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    # hostname: "*.example.com"
  - name: https
    port: 443
    protocol: HTTPS
    # hostname: "*.example.com"
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: example-com
EOF

Check for any TLS certificate issues on the gateway.

kubectl -n default describe gateway eg

Create two HTTPRoutes and attach them to the HTTP and HTTPS listeners using the sectionName field.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: tls-redirect
spec:
  parentRefs:
    - name: eg
      sectionName: http
  hostnames:
    # - "*.example.com" # catch all hostnames
    - "www.example.com"
  rules:
    - filters:
        - type: RequestRedirect
          requestRedirect:
            scheme: https
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
      sectionName: https
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF

Curl the example app through the HTTP listener:

curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/get

Curl the example app through the HTTPS listener:

curl -v -H 'Host:www.example.com' --resolve "www.example.com:443:$GATEWAY_HOST" \
--cacert CA.crt https://www.example.com:443/get

Path Redirects

Path redirects use an HTTP Path Modifier to replace either entire paths or path prefixes. For example, the HTTPRoute below will issue a 302 redirect for all path.redirect.example requests whose path begins with /get, sending them to /status/200.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-path-redirect
spec:
  parentRefs:
    - name: eg
  hostnames:
    - path.redirect.example
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: /get
      filters:
      - type: RequestRedirect
        requestRedirect:
          path:
            type: ReplaceFullPath
            replaceFullPath: /status/200
          statusCode: 302
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-filter-path-redirect -o yaml

Querying path.redirect.example should result in a 302 response from the example Gateway and a redirect location containing the configured redirect path.

Query the path.redirect.example host:

curl -vvv --header "Host: path.redirect.example" "http://${GATEWAY_HOST}/get"

You should receive a 302 with a redirect location of http://path.redirect.example/status/200.
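
Prefix replacement works the same way; here is a sketch of just the filter, using ReplacePrefixMatch so that only the matched /get prefix is rewritten (e.g. /get/foo would redirect to /status/foo):

filters:
- type: RequestRedirect
  requestRedirect:
    path:
      type: ReplacePrefixMatch
      # Replaces the matched PathPrefix (/get) with /status.
      replacePrefixMatch: /status
    statusCode: 302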

2.13 - HTTP Request Headers

The HTTPRoute resource can modify the headers of a request before forwarding it to the upstream service. HTTPRoute rules cannot use both filter types at once. At the time of this writing, Envoy Gateway only supports the core HTTPRoute filters, which consist of RequestRedirect and RequestHeaderModifier. To learn more about HTTP routing, refer to the Gateway API documentation.

A RequestHeaderModifier filter instructs Gateways to modify the headers in requests that match the rule before forwarding the request upstream. Note that the RequestHeaderModifier filter will only modify headers before the request is sent from Envoy to the upstream service and will not affect response headers returned to the downstream client.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Adding Request Headers

The RequestHeaderModifier filter can add new headers to a request before it is sent to the upstream. If the request does not have the header configured by the filter, then that header will be added to the request. If the request already has the header configured by the filter, then the value of the header in the filter will be appended to the value of the header in the request.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: "add-header"
          value: "foo"
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-headers -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying headers.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate that the upstream example app received the header add-header with the value: something,foo

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "add-header: something"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
 "headers": {
  "Accept": [
   "*/*"
  ],
  "Add-Header": [
   "something",
   "foo"
  ],
...

Setting Request Headers

Setting headers is similar to adding headers. If the request does not have the header configured by the filter, then it will be added, but unlike adding request headers which will append the value of the header if the request already contains it, setting a header will cause the value to be replaced by the value configured in the filter.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: "set-header"
          value: "foo"
EOF

Querying headers.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate that the upstream example app received the header set-header with the original value something replaced by foo.

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "set-header: something"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> set-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
 "headers": {
  "Accept": [
   "*/*"
  ],
  "Set-Header": [
   "foo"
  ],
...

Removing Request Headers

Headers can be removed from a request by simply supplying a list of header names.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        remove:
        - "remove-header"
EOF

Querying headers.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate that the upstream example app received the header add-header, but the header remove-header that was sent by curl was removed before the upstream received the request.

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "add-header: something" --header "remove-header: foo"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<

 "headers": {
  "Accept": [
   "*/*"
  ],
  "Add-Header": [
   "something"
  ],
...

Combining Filters

Headers can be added, set, and removed in a single filter on the same HTTPRoute, and they will all perform as expected.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
        - name: "add-header-1"
          value: "foo"
        set:
        - name: "set-header-1"
          value: "bar"
        remove:
        - "removed-header"
EOF

Early Header Modification

In some cases, it may be necessary to modify headers before the proxy performs any processing, routing, or tracing. Envoy Gateway supports this functionality using the ClientTrafficPolicy API.

A ClientTrafficPolicy resource can be attached to a Gateway resource to configure early header modifications for all of its routes. The following example demonstrates how early header modification can be configured.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        add:
          - name: early-added-header
            value: late
          - name: early-set-header
            value: late
          - name: early-removed-header
            value: late 
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-early-headers
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  headers:
    earlyRequestHeaders:
      add:
        - name: "early-added-header"
          value: "early"
      set:
        - name: "early-set-header"
          value: "early"
      remove:
        - "early-removed-header"
EOF

Querying headers.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate that the upstream example app received the following headers:

  • early-added-header contains the client value plus the early (ClientTrafficPolicy) and late (RouteFilter) values, since adding appends to existing values
  • early-set-header contains only the early (ClientTrafficPolicy) and late (RouteFilter) values, since the early modification overwrote the client value
  • early-removed-header contains only the late (RouteFilter) value, since the early modification deleted the client value

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "early-added-header: client" --header "early-set-header: client" --header "early-removed-header: client"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> early-added-header: client
> early-set-header: client
> early-removed-header: client
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<

 "headers": {
  "Accept": [
   "*/*"
  ],
  "Early-Added-Header": [
   "client",
   "early",
   "late"
  ],
  "Early-Set-Header": [
   "early",
   "late"
  ],
  "Early-removed-Header": [
    "late"
  ]
...

2.14 - HTTP Response Headers

The HTTPRoute resource can modify the headers of a response before it is returned to the downstream client. To learn more about HTTP routing, refer to the Gateway API documentation.

A ResponseHeaderModifier filter instructs Gateways to modify the headers in responses that match the rule before responding to the downstream. Note that the ResponseHeaderModifier filter only modifies headers on the response returned from Envoy to the downstream client; it does not affect the request headers forwarded to the upstream service.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Adding Response Headers

The ResponseHeaderModifier filter can add new headers to a response before it is returned to the downstream client. If the response does not have the header configured by the filter, then that header will be added to the response. If the response already has the header configured by the filter, then the value of the header in the filter will be appended to the value of the header in the response.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        add:
        - name: "add-header"
          value: "foo"
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-headers -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying headers.example/get should result in a 200 response from the example Gateway, and the response returned to the downstream client should include the header add-header with the value foo.

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: X-Foo: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: X-Foo: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
< x-foo: value1
< add-header: foo
<
...
 "headers": {
  "Accept": [
   "*/*"
  ],
  "X-Echo-Set-Header": [
   "X-Foo: value1"
  ]
...

Setting Response Headers

Setting headers is similar to adding headers. If the response does not have the header configured by the filter, then it will be added, but unlike adding response headers which will append the value of the header if the response already contains it, setting a header will cause the value to be replaced by the value configured in the filter.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        set:
        - name: "set-header"
          value: "foo"
EOF

Querying headers.example/get should result in a 200 response from the example Gateway, and the response returned to the downstream client should include the header set-header with the original value value1 replaced by foo.

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: set-header: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: set-header: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
< set-header: foo
<
 "headers": {
  "Accept": [
   "*/*"
  ],
  "X-Echo-Set-Header": [
    "set-header": value1"
  ]
...

Removing Response Headers

Headers can be removed from a response by simply supplying a list of header names.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        remove:
        - "remove-header"
EOF

Querying headers.example/get should result in a 200 response from the example Gateway, and the output should indicate that the header remove-header, which the backend set on its response (via the X-Echo-Set-Header echo), was removed before the response was returned to the downstream client.

$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: remove-header: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: remove-header: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<

 "headers": {
  "Accept": [
   "*/*"
  ],
  "X-Echo-Set-Header": [
    "remove-header": value1"
  ]
...

Combining Filters

Headers can be added, set, and removed in a single filter on the same HTTPRoute, and they will all perform as expected.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - headers.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    filters:
    - type: ResponseHeaderModifier
      responseHeaderModifier:
        add:
        - name: "add-header-1"
          value: "foo"
        set:
        - name: "set-header-1"
          value: "bar"
        remove:
        - "removed-header"
EOF


2.15 - HTTP Routing

The HTTPRoute resource allows users to configure HTTP routing by matching HTTP traffic and forwarding it to Kubernetes backends. Currently, the only backend supported by Envoy Gateway is a Service resource. This task shows how to route traffic based on host, header, and path fields and forward the traffic to different Kubernetes Services. To learn more about HTTP routing, refer to the Gateway API documentation.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Installation

Install the HTTP routing example resources:

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/http-routing.yaml

The manifest installs a GatewayClass, Gateway, four Deployments, four Services, and three HTTPRoute resources. The GatewayClass is a cluster-scoped resource that represents a class of Gateways that can be instantiated.

Note: Envoy Gateway is configured by default to manage a GatewayClass with controllerName: gateway.envoyproxy.io/gatewayclass-controller.

Verification

Check the status of the GatewayClass:

kubectl get gc --selector=example=http-routing

The status should reflect “Accepted=True”, indicating Envoy Gateway is managing the GatewayClass.

A Gateway represents configuration of infrastructure. When a Gateway is created, Envoy proxy infrastructure is provisioned or configured by Envoy Gateway. The gatewayClassName defines the name of a GatewayClass used by this Gateway. Check the status of the Gateway:

kubectl get gateways --selector=example=http-routing

The status should reflect “Ready=True”, indicating the Envoy proxy infrastructure has been provisioned. The status also provides the address of the Gateway. This address is used later to test connectivity to proxied backend services.

The three HTTPRoute resources create routing rules on the Gateway. In order to receive traffic from a Gateway, an HTTPRoute must be configured with parentRefs which reference the parent Gateway(s) that it should be attached to. An HTTPRoute can match against a single set of hostnames. These hostnames are matched before any other matching within the HTTPRoute takes place. Since example.com, foo.example.com, and bar.example.com are separate hosts with different routing requirements, each is deployed as its own HTTPRoute: example-route, foo-route, and bar-route.

Check the status of the HTTPRoutes:

kubectl get httproutes --selector=example=http-routing -o yaml

The status for each HTTPRoute should surface “Accepted=True” and a parentRef that references the example Gateway. The example-route matches any traffic for “example.com” and forwards it to the “example-svc” Service.

Testing the Configuration

Before testing HTTP routing to the example-svc backend, get the Gateway’s address.

export GATEWAY_HOST=$(kubectl get gateway/example-gateway -o jsonpath='{.status.addresses[0].value}')

Test HTTP routing to the example-svc backend.

curl -vvv --header "Host: example.com" "http://${GATEWAY_HOST}/"

A 200 status code should be returned and the body should include "pod": "example-backend-*" indicating the traffic was routed to the example backend service. If you change the hostname to a hostname not represented in any of the HTTPRoutes, e.g. “www.example.com”, the HTTP traffic will not be routed and a 404 should be returned.
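
As a quick negative check (expecting a 404, since www.example.com does not appear in any of these HTTPRoutes):

curl -I --header "Host: www.example.com" "http://${GATEWAY_HOST}/"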

The foo-route matches any traffic for foo.example.com and applies its routing rules to forward the traffic to the “foo-svc” Service. Since there is only one path prefix match for /login, only foo.example.com/login/* traffic will be forwarded. Test HTTP routing to the foo-svc backend.

curl -vvv --header "Host: foo.example.com" "http://${GATEWAY_HOST}/login"

A 200 status code should be returned and the body should include "pod": "foo-backend-*" indicating the traffic was routed to the foo backend service. Traffic to any other paths that do not begin with /login will not be matched by this HTTPRoute. Test this by removing /login from the request.

curl -vvv --header "Host: foo.example.com" "http://${GATEWAY_HOST}/"

The HTTP traffic will not be routed and a 404 should be returned.

Similarly, the bar-route HTTPRoute matches traffic for bar.example.com. All traffic for this hostname will be evaluated against the routing rules. The most specific match will take precedence which means that any traffic with the env:canary header will be forwarded to bar-svc-canary and if the header is missing or not canary then it’ll be forwarded to bar-svc. Test HTTP routing to the bar-svc backend.

curl -vvv --header "Host: bar.example.com" "http://${GATEWAY_HOST}/"

A 200 status code should be returned and the body should include "pod": "bar-backend-*" indicating the traffic was routed to the bar backend service.

Test HTTP routing to the bar-canary-svc backend by adding the env: canary header to the request.

curl -vvv --header "Host: bar.example.com" --header "env: canary" "http://${GATEWAY_HOST}/"

A 200 status code should be returned and the body should include "pod": "bar-canary-backend-*" indicating the traffic was routed to the bar-svc-canary backend service.

JWT Claims Based Routing

Users can route to a specific backend by matching on JWT claims. This can be achieved by defining a SecurityPolicy with a jwt configuration that does the following:

  • Converts JWT claims to headers, which can be used for header-based routing
  • Sets the recomputeRoute field to true. This is required so that the incoming request first matches a fallback/catch-all route where the JWT can be authenticated, the claims from the JWT can be converted to headers, and the route match can then be recomputed against the updated headers.

For this feature to work, make sure that:

  • A fallback route rule is defined; the backend for this route rule can be invalid.
  • The SecurityPolicy is applied to both the fallback route and the route with the claim header matches, to avoid spoofing.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-example
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: jwt-claim-routing
  jwt:
    providers:
      - name: example
        recomputeRoute: true
        claimToHeaders:
          - claim: sub
            header: x-sub
          - claim: admin
            header: x-admin
          - claim: name
            header: x-name
        remoteJWKS:
          uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: jwt-claim-routing
spec:
  parentRefs:
    - name: eg
  rules:
    - backendRefs:
        - kind: Service
          name: foo-svc
          port: 8080
          weight: 1
      matches:
        - headers:
            - name: x-name
              value: John Doe
    - backendRefs:
        - kind: Service
          name: bar-svc
          port: 8080
          weight: 1
      matches:
        - headers:
            - name: x-name
              value: Tom
    # catch all
    - backendRefs:
        - kind: Service
          name: infra-backend-invalid
          port: 8080
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF

Get the JWT used for testing request authentication:

TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode

Test routing to the foo-svc backend by specifying a JWT token whose name claim is John Doe.

curl -sS -H "Host: foo.example.com" -H "Authorization: Bearer $TOKEN" "http://${GATEWAY_HOST}/login" | jq .pod
"foo-backend-6df8cc6b9f-fmwcg"

Get another JWT used for testing request authentication:

TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/with-different-claim.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode

Test HTTP routing to the bar-svc backend by specifying a JWT token whose name claim is Tom.

curl -sS -H "Host: bar.example.com" -H "Authorization: Bearer $TOKEN" "http://${GATEWAY_HOST}/" | jq .pod
"bar-backend-6688b8944c-s8htr"

2.16 - HTTP Timeouts

The default request timeout is set to 15 seconds in Envoy Proxy. The HTTPRouteTimeouts resource allows users to configure request timeouts for an HTTPRouteRule. This task shows you how to configure timeouts.

The HTTPRouteTimeouts supports two kinds of timeouts:

  • request: Request specifies the maximum duration for a gateway to respond to an HTTP request.
  • backendRequest: BackendRequest specifies a timeout for an individual request from the gateway to a backend.

Note: The Request duration must be >= BackendRequest duration
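
For reference, a rule that sets both timeouts could look like this sketch (the durations are illustrative and respect the constraint above):

  rules:
  - backendRefs:
    - name: backend
      port: 3000
    matches:
    - path:
        type: PathPrefix
        value: /
    timeouts:
      request: "10s"
      backendRequest: "5s"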

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Verification

The example backend can delay its responses; we use this to control the response time.

Request Timeout

We configure the backend to delay responses by 3 seconds, then we set the request timeout to 4 seconds. Envoy Gateway will successfully respond to the request.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  hostnames:
  - timeout.example.com
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
    timeouts:
      request: "4s"
EOF

$ curl --header "Host: timeout.example.com" "http://${GATEWAY_HOST}/?delay=3s" -I
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 04 Mar 2024 02:34:21 GMT
content-length: 480

Then we set the request timeout to 2 seconds. In this case, Envoy Gateway will respond with a timeout.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  hostnames:
  - timeout.example.com
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
    timeouts:
      request: "2s"
EOF

$ curl --header "Host: timeout.example.com" "http://${GATEWAY_HOST}/?delay=3s" -v
*   Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80
> GET /?delay=3s HTTP/1.1
> Host: timeout.example.com
> User-Agent: curl/8.6.0
> Accept: */*
>


< HTTP/1.1 504 Gateway Timeout
< content-length: 24
< content-type: text/plain
< date: Mon, 04 Mar 2024 02:35:03 GMT
<
* Connection #0 to host 127.0.0.1 left intact
upstream request timeout

2.17 - HTTP URL Rewrite

HTTPURLRewriteFilter defines a filter that modifies a request during forwarding. At most one of these filters may be used on a Route rule. This MUST NOT be used on the same Route rule as a HTTPRequestRedirect filter.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Rewrite URL Prefix Path

You can rewrite the URL prefix path as shown below. In this example, any request to http://${GATEWAY_HOST}/get/xxx will be rewritten to http://${GATEWAY_HOST}/replace/xxx.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-url-rewrite
spec:
  parentRefs:
    - name: eg
  hostnames:
    - path.rewrite.example
  rules:
    - matches:
      - path:
          value: "/get"
      filters:
      - type: URLRewrite
        urlRewrite:
          path:
            type: ReplacePrefixMatch
            replacePrefixMatch: /replace
      backendRefs:
      - name: backend
        port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-filter-url-rewrite -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying http://${GATEWAY_HOST}/get/origin/path should result in the path being rewritten to http://${GATEWAY_HOST}/replace/origin/path.

$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get/origin/path"
...
> GET /get/origin/path HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>

< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:03:28 GMT
< content-length: 503
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/replace/origin/path",
 "host": "path.rewrite.example",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.85.0"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Original-Path": [
   "/get/origin/path"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "fd84b842-9937-4fb5-83c7-61470d854b90"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-6fdd4b9bd8-8vlc5"
...

You can see that the X-Envoy-Original-Path is /get/origin/path, but the actual path is /replace/origin/path.

Rewrite URL Full Path

You can rewrite the full path in the URL as shown below. In this example, any request sent to http://${GATEWAY_HOST}/get/origin/path/xxxx will be rewritten to http://${GATEWAY_HOST}/force/replace/fullpath.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-url-rewrite
spec:
  parentRefs:
    - name: eg
  hostnames:
    - path.rewrite.example
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: "/get/origin/path"
      filters:
      - type: URLRewrite
        urlRewrite:
          path:
            type: ReplaceFullPath
            replaceFullPath: /force/replace/fullpath
      backendRefs:
      - name: backend
        port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-filter-url-rewrite -o yaml

Querying http://${GATEWAY_HOST}/get/origin/path/extra should rewrite the request to http://${GATEWAY_HOST}/force/replace/fullpath.

$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get/origin/path/extra"
...
> GET /get/origin/path/extra HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:09:31 GMT
< content-length: 512
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/force/replace/fullpath",
 "host": "path.rewrite.example",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.85.0"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Original-Path": [
   "/get/origin/path/extra"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "8ab774d6-9ffa-4faa-abbb-f45b0db00895"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-6fdd4b9bd8-8vlc5"
...

You can see that the X-Envoy-Original-Path is /get/origin/path/extra, but the actual path is /force/replace/fullpath.

Rewrite URL Path with Regex

In addition to core Gateway-API rewrite options, Envoy Gateway supports extended rewrite options through the HTTPRouteFilter API. The HTTPRouteFilter API can be configured to use RE2-compatible regex matchers and substitutions to rewrite a portion of the url. In the example below, requests sent to http://${GATEWAY_HOST}/service/xxx/yyy (where xxx is a single path portion and yyy is one or more path portions) are rewritten to http://${GATEWAY_HOST}/yyy/instance/xxx. The entire path is matched and rewritten using capture groups.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-url-regex-rewrite
spec:
  parentRefs:
    - name: eg
  hostnames:
    - path.regex.rewrite.example
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: "/"
      filters:
        - type: ExtensionRef
          extensionRef:
            group: gateway.envoyproxy.io
            kind: HTTPRouteFilter
            name: regex-path-rewrite
      backendRefs:
      - name: backend
        port: 3000
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: HTTPRouteFilter
metadata:
  name: regex-path-rewrite
spec:
  urlRewrite:
    path:
      type: ReplaceRegexMatch
      replaceRegexMatch:
        pattern: '^/service/([^/]+)(/.*)$'
        substitution: '\2/instance/\1'
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-filter-url-regex-rewrite -o yaml

Querying http://${GATEWAY_HOST}/service/foo/v1/api should rewrite the request to http://${GATEWAY_HOST}/v1/api/instance/foo.

$ curl -L -vvv --header "Host: path.regex.rewrite.example" "http://${GATEWAY_HOST}/service/foo/v1/api"
...
> GET /service/foo/v1/api HTTP/1.1
> Host: path.regex.rewrite.example
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Mon, 16 Sep 2024 18:49:48 GMT
< content-length: 482
<
{
 "path": "/v1/api/instance/foo",
 "host": "path.regex.rewrite.example",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.7.1"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "10.244.0.37"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "24a5958f-1bfa-4694-a9c1-807d5139a18a"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-765694d47f-lzmpm"
...

You can see that the path is rewritten from /service/foo/v1/api to /v1/api/instance/foo.

Rewrite Host Name

You can rewrite the host name as shown below. In this example, any request sent to http://${GATEWAY_HOST}/get with --header "Host: path.rewrite.example" will have its host rewritten to envoygateway.io.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-url-rewrite
spec:
  parentRefs:
    - name: eg
  hostnames:
    - path.rewrite.example
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: "/get"
      filters:
      - type: URLRewrite
        urlRewrite:
          hostname: "envoygateway.io"
      backendRefs:
      - name: backend
        port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-filter-url-rewrite -o yaml

Querying http://${GATEWAY_HOST}/get with --header "Host: path.rewrite.example" should show the host rewritten to envoygateway.io.

$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:15:15 GMT
< content-length: 481
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/get",
 "host": "envoygateway.io",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.85.0"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Forwarded-Host": [
   "path.rewrite.example"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "39aa447c-97b9-45a3-a675-9fb266ab1af0"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-6fdd4b9bd8-8vlc5"
...

You can see that the X-Forwarded-Host is path.rewrite.example, but the actual host is envoygateway.io.

Rewrite URL Host Name by Header or Backend

In addition to core Gateway-API rewrite options, Envoy Gateway supports extended rewrite options through the HTTPRouteFilter API. The HTTPRouteFilter API can be configured to rewrite the Host header value to:

  • The value of a different request header
  • The DNS name of the backend that the request is routed to

In the following example, the host header is rewritten to the value of the x-custom-host header.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-filter-hostname-header-rewrite
spec:
  parentRefs:
    - name: eg
  hostnames:
    - host.header.rewrite.example
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: "/header"
      filters:
        - type: ExtensionRef
          extensionRef:
            group: gateway.envoyproxy.io
            kind: HTTPRouteFilter
            name: header-host-rewrite
      backendRefs:
      - name: backend
        port: 3000
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: HTTPRouteFilter
metadata:
  name: header-host-rewrite
spec:
  urlRewrite:
    hostname:
      type: Header
      header: x-custom-host      
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-filter-hostname-header-rewrite -o yaml

Querying http://${GATEWAY_HOST}/header and providing a custom host rewrite header x-custom-host should rewrite the request host header to the value of the x-custom-host header.

$ curl -L -vvv --header "Host: host.header.rewrite.example" --header "x-custom-host: foo" "http://${GATEWAY_HOST}/header"
...
> GET /header HTTP/1.1
> Host: host.header.rewrite.example
> User-Agent: curl/8.7.1
> Accept: */*
> x-custom-host: foo
>
* Request completely sent off
< HTTP/1.1 200 OK
<
{
 "path": "/header",
 "host": "foo",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "X-Custom-Host": [
   "foo"
  ],
  "X-Forwarded-Host": [
   "host.header.rewrite.example"
  ],
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-765694d47f-5t6f2"
...

You can see that the host is rewritten from host.header.rewrite.example to the value of the provided x-custom-host header, foo. The original host header is preserved in the X-Forwarded-Host header.

2.18 - HTTP3

This task will help you get started with HTTP/3 using Envoy Gateway. It uses a self-signed CA, so it should be used for testing and demonstration purposes only.

Prerequisites

  • OpenSSL to generate TLS assets.

Installation

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

TLS Certificates

Generate the certificates and keys used by the Gateway to terminate client TLS connections.

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt

Create a certificate and a private key for www.example.com:

openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt

Store the cert/key in a Secret:

kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt

Update the Gateway from the Quickstart to include an HTTPS listener that listens on port 443 and references the example-cert Secret:

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/-
    value:
      name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
        - kind: Secret
          group: ""
          name: example-cert
  '

Apply the following ClientTrafficPolicy to enable HTTP3:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-http3
spec:
  http3: {}
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
EOF

Verify the Gateway status:

kubectl get gateway/eg -o yaml

Testing

Get the External IP of the Gateway:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Query the example app through the Gateway:

The example below uses a Docker image that ships a curl binary built with HTTP/3 support.

docker run --net=host --rm ghcr.io/macbre/curl-http3 curl -kv --http3 -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" https://www.example.com/get

It is currently not possible to port-forward UDP through a Kubernetes Service (see https://github.com/kubernetes/kubernetes/issues/47862), so an external load balancer is needed to test this feature.

2.19 - HTTPRoute Request Mirroring

The HTTPRoute resource allows one or more backendRefs to be provided. Requests will be routed to these upstreams. It is possible to divide the traffic between these backends using Traffic Splitting, but it is also possible to mirror requests to another Service instead. Request mirroring is accomplished using Gateway API’s HTTPRequestMirrorFilter on the HTTPRoute.

When requests are made to a HTTPRoute that uses a HTTPRequestMirrorFilter, the response will never come from the backendRef defined in the filter. Responses from the mirror backendRef are always ignored.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Mirroring the Traffic

Next, create a new Deployment and Service to mirror requests to. The following example will use a second instance of the application deployed in the quickstart.

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-2
---
apiVersion: v1
kind: Service
metadata:
  name: backend-2
  labels:
    app: backend-2
    service: backend-2
spec:
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-2
      version: v1
  template:
    metadata:
      labels:
        app: backend-2
        version: v1
    spec:
      serviceAccountName: backend-2
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: backend-2
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
EOF

Then create an HTTPRoute that uses an HTTPRequestMirrorFilter to send requests to the original service from the quickstart, and mirror requests to the service that was just deployed.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-mirror
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          kind: Service
          name: backend-2
          port: 3000
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-mirror -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying backends.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate which pod handled the request. There is only one pod in the deployment for the example app from the quickstart, so it will be the same on all subsequent requests.

$ curl -v --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...

 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-79665566f5-s589f"
...

Check the logs of the pods and you will see that the original deployment and the new deployment each got a request:

$ kubectl logs deploy/backend && kubectl logs deploy/backend-2
...
Starting server, listening on port 3000 (http)
Echoing back request made to /get to client (10.42.0.10:41566)
Starting server, listening on port 3000 (http)
Echoing back request made to /get to client (10.42.0.10:45096)

Multiple BackendRefs

When an HTTPRoute has multiple backendRefs and an HTTPRequestMirrorFilter, traffic splitting behaves as it normally would for the main backendRefs, while the backendRef of the HTTPRequestMirrorFilter continues to receive mirrored copies of the incoming requests.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-mirror
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          kind: Service
          name: backend-2
          port: 3000
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
    - group: ""
      kind: Service
      name: backend-3
      port: 3000
EOF

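Note that this manifest references a backend-3 Service that is not created in this task; substitute any additional Service deployed in your cluster. To confirm that the mirror backend keeps receiving a copy of every request regardless of which main backend serves it, watch its logs:

kubectl logs deploy/backend-2 -f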

Multiple HTTPRequestMirrorFilters

Multiple HTTPRequestMirrorFilters are not supported on the same HTTPRoute rule. When attempting to do so, the admission webhook will reject the configuration.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-mirror
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    filters:
    - type: RequestMirror
      requestMirror:
        backendRef:
          kind: Service
          name: backend-2
          port: 3000
    - type: RequestMirror
      requestMirror:
        backendRef:
          kind: Service
          name: backend-3
          port: 3000
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

Error from server: error when creating "STDIN": admission webhook "validate.gateway.networking.k8s.io" denied the request: spec.rules[0].filters: Invalid value: "RequestMirror": cannot be used multiple times in the same rule

2.20 - HTTPRoute Traffic Splitting

The HTTPRoute resource allows one or more backendRefs to be provided. Requests will be routed to these upstreams if they match the rules of the HTTPRoute. If an invalid backendRef is configured, then HTTP responses will be returned with status code 500 for all requests that would have been sent to that backend.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Single backendRef

When a single backendRef is configured in a HTTPRoute, it will receive 100% of the traffic.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-headers -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Querying backends.example/get should result in a 200 response from the example Gateway and the output from the example app should indicate which pod handled the request. There is only one pod in the deployment for the example app from the quickstart, so it will be the same on all subsequent requests.

$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-79665566f5-s589f"
...

Multiple backendRefs

If multiple backendRefs are configured, then traffic will be split between the backendRefs equally unless a weight is configured.

First, create a second instance of the example app from the quickstart:

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend-2
---
apiVersion: v1
kind: Service
metadata:
  name: backend-2
  labels:
    app: backend-2
    service: backend-2
spec:
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-2
      version: v1
  template:
    metadata:
      labels:
        app: backend-2
        version: v1
    spec:
      serviceAccountName: backend-2
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: backend-2
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
EOF

Then create an HTTPRoute that uses both the app from the quickstart and the second instance that was just created:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
    - group: ""
      kind: Service
      name: backend-2
      port: 3000
EOF

Querying backends.example/get should result in 200 responses from the example Gateway, and the pod name reported by the example app should alternate between the pod from the quickstart and the pod from the new backend-2 deployment on subsequent requests.

$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-75bcd4c969-lsxpz"
...

Weighted backendRefs

If multiple backendRefs are configured and an uneven traffic split between the backends is desired, the weight field can be used to control the proportion of requests sent to each backend. If weight is not configured for a backendRef, it defaults to 1.

The weight field in a backendRef controls the distribution of the traffic split. The proportion of requests to a single backendRef is calculated by dividing its weight by the sum of all backendRef weights in the HTTPRoute. The weight is not a percentage and the sum of all weights does not need to add up to 100.

The HTTPRoute below will configure the gateway to send 80% of the traffic to the backend service, and 20% to the backend-2 service.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 8
    - group: ""
      kind: Service
      name: backend-2
      port: 3000
      weight: 2
EOF
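
To sanity-check the split, you can send a batch of requests and tally which pod served each one. This is an illustrative snippet, not part of the official task; it assumes the echo app reports the serving pod in the pod field of its JSON response:

# Illustrative check: tally responses per pod over 100 requests;
# roughly 80 should come from the backend pods and 20 from backend-2.
for i in $(seq 1 100); do
  curl -s --header "Host: backends.example" "http://${GATEWAY_HOST}/get" | grep -o '"pod": "[^"]*"'
done | sort | uniq -c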

Invalid backendRefs

backendRefs can be considered invalid for the following reasons:

  • The group field is configured to something other than "". Currently, only the core API group (specified by omitting the group field or setting it to an empty string) is supported
  • The kind field is configured to anything other than Service. Envoy Gateway currently only supports Kubernetes Service backendRefs
  • The backendRef configures a service with a namespace not permitted by any existing ReferenceGrants
  • The port field is not configured or is configured to a port that does not exist on the Service
  • The named Service configured by the backendRef cannot be found

Modifying the above example to make the backend-2 backendRef invalid by using a port that does not exist on the Service will result in 80% of the traffic being sent to the backend service, and 20% of the traffic receiving an HTTP response with status code 500.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-headers
spec:
  parentRefs:
  - name: eg
  hostnames:
  - backends.example
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 8
    - group: ""
      kind: Service
      name: backend-2
      port: 9000
      weight: 2
EOF

Querying backends.example/get should result in 200 responses 80% of the time, and 500 responses 20% of the time.

$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 500 Internal Server Error
< server: envoy
< content-length: 0
<
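
To observe the approximate 80/20 distribution, you can send a batch of requests and count the returned status codes. This is an illustrative snippet; expect roughly 80 occurrences of 200 and 20 of 500:

# Illustrative check: tally HTTP status codes over 100 requests.
for i in $(seq 1 100); do
  curl -s -o /dev/null -w "%{http_code}\n" --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
done | sort | uniq -c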

2.21 - Load Balancing

Envoy load balancing is a way of distributing traffic between multiple hosts within a single upstream cluster in order to effectively make use of available resources.

Envoy Gateway supports the following load balancing policies:

  • Round Robin: a simple policy in which each available upstream host is selected in round robin order.
  • Random: load balancer selects a random available host.
  • Least Request: load balancer uses different algorithms depending on whether hosts have the same or different weights.
  • Consistent Hash: load balancer implements consistent hashing to upstream hosts.

Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired load balancing policies. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource. If loadBalancer is not specified in BackendTrafficPolicy, the default load balancing policy is Least Request.

Prerequisites

Install Envoy Gateway

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

To better exercise the load balancer, you can add more hosts to the upstream cluster by increasing the replicas of the backend deployment:

kubectl patch deployment backend -n default -p '{"spec": {"replicas": 4}}'
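
You can confirm that the additional replicas are running before generating load (an illustrative check):

kubectl get pods -l app=backend -n default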

Install the hey load testing tool

Install the hey CLI tool. It will be used to generate load and measure response times.

Follow the installation instructions from the Hey project docs.

Round Robin

This example will create a Load Balancer with Round Robin policy via BackendTrafficPolicy.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: round-robin-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: round-robin-route
  loadBalancer:
    type: RoundRobin
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: round-robin-route
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /round
      backendRefs:
        - name: backend
          port: 3000
EOF

The hey tool will be used to generate 100 concurrent requests.

hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/round
Summary:
  Total:	0.0487 secs
  Slowest:	0.0440 secs
  Fastest:	0.0181 secs
  Average:	0.0307 secs
  Requests/sec:	2053.1676

  Total data:	50500 bytes
  Size/request:	505 bytes

Response time histogram:
  0.018 [1]	    |■■
  0.021 [2]  	|■■■■
  0.023 [10]	|■■■■■■■■■■■■■■■■■■■■■■
  0.026 [16]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.028 [7]  	|■■■■■■■■■■■■■■■■
  0.031 [10]	|■■■■■■■■■■■■■■■■■■■■■■
  0.034 [17]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.036 [18]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.039 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■
  0.041 [6] 	|■■■■■■■■■■■■■
  0.044 [2]	    |■■■■

As a result, you can see that all available upstream hosts receive traffic evenly.

kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-2gfp7: received 26 requests
backend-69fcff487f-69g8c: received 25 requests
backend-69fcff487f-bqwpr: received 24 requests
backend-69fcff487f-kbn8l: received 25 requests

Note that these results may vary; the output here is for reference purposes only.

Random

This example will create a Load Balancer with Random policy via BackendTrafficPolicy.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: random-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: random-route
  loadBalancer:
    type: Random
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: random-route
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /random
      backendRefs:
        - name: backend
          port: 3000
EOF

The hey tool will be used to generate 1000 requests with 100 concurrent workers.

hey -n 1000 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/random
Summary:
  Total:	0.2624 secs
  Slowest:	0.0851 secs
  Fastest:	0.0007 secs
  Average:	0.0179 secs
  Requests/sec:	3811.3020

  Total data:	506000 bytes
  Size/request:	506 bytes

Response time histogram:
  0.001 [1] 	|
  0.009 [421]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.018 [219]	|■■■■■■■■■■■■■■■■■■■■■
  0.026 [118]	|■■■■■■■■■■■
  0.034 [64]	|■■■■■■
  0.043 [73]	|■■■■■■■
  0.051 [41]	|■■■■
  0.060 [22]	|■■
  0.068 [19]	|■■
  0.077 [13]	|■
  0.085 [9] 	|■

As a result, you can see that all available upstream hosts receive traffic randomly.

kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-bf6lm: received 246 requests
backend-69fcff487f-gwmqk: received 256 requests
backend-69fcff487f-mzngr: received 230 requests
backend-69fcff487f-xghqq: received 268 requests

Note that these results may vary; the output here is for reference purposes only.

Least Request

This example will create a Load Balancer with Least Request policy via BackendTrafficPolicy.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: least-request-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: least-request-route
  loadBalancer:
    type: LeastRequest
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: least-request-route
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /least
      backendRefs:
        - name: backend
          port: 3000
EOF

The hey tool will be used to generate 100 concurrent requests.

hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/least
Summary:
  Total:	0.0489 secs
  Slowest:	0.0479 secs
  Fastest:	0.0054 secs
  Average:	0.0297 secs
  Requests/sec:	2045.9317

  Total data:	50500 bytes
  Size/request:	505 bytes

Response time histogram:
  0.005 [1] 	|■■
  0.010 [1] 	|■■
  0.014 [8] 	|■■■■■■■■■■■■■■■
  0.018 [6] 	|■■■■■■■■■■■
  0.022 [11]	|■■■■■■■■■■■■■■■■■■■■
  0.027 [7] 	|■■■■■■■■■■■■■
  0.031 [15]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.035 [13]	|■■■■■■■■■■■■■■■■■■■■■■■■
  0.039 [22]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.044 [12]	|■■■■■■■■■■■■■■■■■■■■■■
  0.048 [4] 	|■■■■■■■

As a result, you can see that requests are spread across all available upstream hosts, and host backend-69fcff487f-6l2pw receives fewer requests than the others.

kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-59hvs: received 24 requests
backend-69fcff487f-6l2pw: received 19 requests
backend-69fcff487f-ktsx4: received 30 requests
backend-69fcff487f-nqxc7: received 27 requests

If you send one more request to ${GATEWAY_HOST}/least, host backend-69fcff487f-6l2pw is very likely to be picked by the load balancer to receive it, as the updated counts below show.
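
For example, send one illustrative request and then re-run the counting loop above:

curl -s --header "Host: www.example.com" "http://${GATEWAY_HOST}/least" > /dev/null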

backend-69fcff487f-59hvs: received 24 requests
backend-69fcff487f-6l2pw: received 20 requests
backend-69fcff487f-ktsx4: received 30 requests
backend-69fcff487f-nqxc7: received 27 requests

Note that these results may vary; the output here is for reference purposes only.

Consistent Hash

This example will create a Load Balancer with Consistent Hash policy via BackendTrafficPolicy.

The underlying consistent hash algorithm that Envoy Gateway uses is Maglev, and the hash can be derived from the following attributes:

  • SourceIP
  • Header
  • Cookie

These are also the supported values for the consistent hash type.

Source IP

This example will create a Load Balancer with Source IP based Consistent Hash policy.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: source-ip-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: source-ip-route
  loadBalancer:
    type: ConsistentHash
    consistentHash:
      type: SourceIP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: source-ip-route
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /source
      backendRefs:
        - name: backend
          port: 3000
EOF

The hey tool will be used to generate 100 concurrent requests.

hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/source
Summary:
  Total:	0.0539 secs
  Slowest:	0.0500 secs
  Fastest:	0.0198 secs
  Average:	0.0340 secs
  Requests/sec:	1856.5666

  Total data:	50600 bytes
  Size/request:	506 bytes

Response time histogram:
  0.020 [1] 	|■■
  0.023 [5] 	|■■■■■■■■■■■
  0.026 [12]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.029 [16]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.032 [11]	|■■■■■■■■■■■■■■■■■■■■■■■■
  0.035 [7] 	|■■■■■■■■■■■■■■■■
  0.038 [8] 	|■■■■■■■■■■■■■■■■■■
  0.041 [18]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.044 [15]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.047 [4] 	|■■■■■■■■■
  0.050 [3] 	|■■■■■■■

As a result, you can see that all traffic is routed to a single upstream host, since every request is sent from a client with the same source IP.

kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-grzkj: received 0 requests
backend-69fcff487f-n4d8w: received 100 requests
backend-69fcff487f-tb7zx: received 0 requests
backend-69fcff487f-wbzpg: received 0 requests

If you send these requests from a different client, the upstream host that receives the traffic may vary.

Header

This example will create a Load Balancer with Header-based Consistent Hash policy.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: header-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: header-route
  loadBalancer:
    type: ConsistentHash
    consistentHash:
      type: Header
      header:
        name: FooBar
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: header-route
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /header
      backendRefs:
        - name: backend
          port: 3000
EOF

The hey tool will be used to generate 100 concurrent requests.

hey -n 100 -c 100 -host "www.example.com" -H "FooBar: 1.2.3.4" http://${GATEWAY_HOST}/header
Summary:
  Total:	0.0579 secs
  Slowest:	0.0510 secs
  Fastest:	0.0323 secs
  Average:	0.0431 secs
  Requests/sec:	1728.6064

  Total data:	53800 bytes
  Size/request:	538 bytes

Response time histogram:
  0.032 [1] 	|■■
  0.034 [3] 	|■■■■■■
  0.036 [1] 	|■■
  0.038 [1] 	|■■
  0.040 [7] 	|■■■■■■■■■■■■■■
  0.042 [20]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.044 [20]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.045 [20]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.047 [16]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.049 [9] 	|■■■■■■■■■■■■■■■■■■
  0.051 [2] 	|■■■■

As a result, you can see that all traffic is routed to a single upstream host, since all requests carry the same header value.

kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-dvt9r: received 0 requests
backend-69fcff487f-f8qdl: received 100 requests
backend-69fcff487f-gnpm4: received 0 requests
backend-69fcff487f-t2pgm: received 0 requests

You can try adding a different header value to these requests, and the upstream host that receives the traffic may vary. The following output is what you get when you use hey to send another 100 requests with the header FooBar: 5.6.7.8.
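
For example, the same hey invocation with the new header value:

hey -n 100 -c 100 -host "www.example.com" -H "FooBar: 5.6.7.8" http://${GATEWAY_HOST}/header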

backend-69fcff487f-dvt9r: received 0 requests
backend-69fcff487f-f8qdl: received 100 requests
backend-69fcff487f-gnpm4: received 100 requests
backend-69fcff487f-t2pgm: received 0 requests

Cookie

This example will create a Load Balancer with Cookie-based Consistent Hash policy.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: cookie-policy
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: cookie-route
  loadBalancer:
    type: ConsistentHash
    consistentHash:
      type: Cookie
      cookie:
        name: FooBar
        ttl: 60s
        attributes:
          SameSite: Strict
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: cookie-route
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /cookie
      backendRefs:
        - name: backend
          port: 3000
EOF

By sending 10 requests with curl to ${GATEWAY_HOST}/cookie, you can see that all requests are routed to the same upstream host, since they carry the same cookie.

for i in {1..10}; do curl -I --header "Host: www.example.com" --cookie "FooBar=1.2.3.4" http://${GATEWAY_HOST}/cookie ; sleep 1; done
kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-5dxz9: received 0 requests
backend-69fcff487f-gpvl2: received 0 requests
backend-69fcff487f-pglgv: received 10 requests
backend-69fcff487f-qxr74: received 0 requests

You can try setting a different cookie on these requests; the upstream host that receives the traffic may vary. The following output is what you get when you use curl to send another 10 requests with the cookie FooBar=5.6.7.8.
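
For example, the same curl loop with the new cookie value:

for i in {1..10}; do curl -I --header "Host: www.example.com" --cookie "FooBar=5.6.7.8" http://${GATEWAY_HOST}/cookie ; sleep 1; done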

backend-69fcff487f-dvt9r: received 0 requests
backend-69fcff487f-f8qdl: received 0 requests
backend-69fcff487f-gnpm4: received 10 requests
backend-69fcff487f-t2pgm: received 10 requests

If the cookie has not been set in a request, Envoy Gateway will auto-generate a cookie for it according to the ttl and attributes fields.

In this example, the following cookie will be generated (see the set-cookie header in the response) if a request is sent without a cookie:

curl -v --header "Host: www.example.com" http://${GATEWAY_HOST}/cookie
> GET /cookie HTTP/1.1
> Host: www.example.com
> User-Agent: curl/7.74.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Fri, 19 Jul 2024 16:49:57 GMT
< content-length: 458
< set-cookie: FooBar="88358b9442700c56"; Max-Age=60; SameSite=Strict; HttpOnly
<
{
 "path": "/cookie",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.74.0"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "10.244.0.1"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "1adeaaf7-d45c-48c8-9a4d-eadbccb2fd50"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-69fcff487f-5dxz9"

2.22 - Local Rate Limit

Rate limit is a feature that allows the user to limit the number of incoming requests to a predefined value based on attributes within the traffic flow.

Here are some reasons why you may want to implement rate limits:

  • To prevent malicious activity such as DDoS attacks.
  • To prevent applications and their resources (such as a database) from getting overloaded.
  • To create API limits based on user entitlements.

Envoy Gateway supports two types of rate limiting: Global rate limiting and Local rate limiting.

Local rate limiting applies rate limits to the traffic flowing through a single instance of Envoy proxy. This means that if the data plane has 2 replicas of Envoy running, and the rate limit is 10 requests/second, each replica will allow 10 requests/second. This is in contrast to Global Rate Limiting which applies rate limits to the traffic flowing through all instances of Envoy proxy.

Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their rate limit intent. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Note: Limit is applied per route. Even if a BackendTrafficPolicy targets a gateway, each route in that gateway still has a separate rate limit bucket. For example, if a gateway has 2 routes, and the limit is 100r/s, then each route has its own 100r/s rate limit bucket.
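
As an illustration only, a policy like the following sketch (the policy name policy-gateway is hypothetical; the fields match the examples later in this section) targets the Gateway itself, yet each HTTPRoute attached to eg would still get its own 100 requests/second bucket:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: policy-gateway   # hypothetical name, for illustration only
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  rateLimit:
    type: Local
    local:
      rules:
      - limit:
          requests: 100
          unit: Second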

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Rate Limit Specific User

Here is an example of a rate limit implemented by the application developer to limit a specific user by matching on a custom x-user-id header with a value set to one.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Local
    local:
      rules:
      - clientSelectors:
        - headers:
          - name: x-user-id
            value: one
        limit:
          requests: 3
          unit: Hour
EOF

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-ratelimit -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Let’s query ratelimit.example/get 4 times. We should receive a 200 response from the example Gateway for the first 3 requests, and then a 429 status code for the 4th request, since the limit is set to 3 requests/Hour for requests that contain the x-user-id header with the value one.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

You should be able to send requests with the x-user-id header and a different value and receive successful responses from the server.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: two" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:36 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:37 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:38 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:39 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

Rate Limit Specific User Unless within Test Org

Here is an example of a rate limit implemented by the application developer to limit a specific user by matching on a custom x-user-id header with a value set to one. However, the user must not be limited if they belong to the test org, which is determined by the custom header x-org-id being set to test.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Local
    local:
      rules:
      - clientSelectors:
        - headers:
          - name: x-user-id
            value: one
          - name: x-org-id
            value: test
            invert: true
        limit:
          requests: 3
          unit: Hour
EOF

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.

kubectl get httproute/http-ratelimit -o yaml

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Let’s query ratelimit.example/get 4 times with x-user-id set to one and x-org-id set to org1. We should receive a 200 response from the example Gateway for the first 3 requests and the last request should be rate limited.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" --header "x-org-id: org1" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

Let’s query ratelimit.example/get 4 times with x-user-id set to one and x-org-id set to test. We should receive a 200 response from the example Gateway for all 4 requests, unlike the previous example where the last request was rate limited.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" --header "x-org-id: test" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

Rate Limit All Requests

This example shows you how to rate limit all requests matching the HTTPRoute rule at 3 requests/Hour by leaving the clientSelectors field unset.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy 
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Local
    local:
      rules:
      - limit:
          requests: 3
          unit: Hour
EOF

HTTPRoute

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: http-ratelimit
spec:
  parentRefs:
  - name: eg
  hostnames:
  - ratelimit.example 
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
EOF

Let’s query ratelimit.example/get 4 times. We should receive a 200 response for the first 3 requests and a 429 for the 4th, since every request matching the rule now counts against the same 3 requests/Hour limit.

for i in {1..4}; do curl -I --header "Host: ratelimit.example" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy

HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy

HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked

Note: Local rate limiting does not support distinct matching. If you want to rate limit based on distinct values, you should use Global Rate Limiting.

2.23 - Multicluster Service Routing

The Multicluster Service API ServiceImport object can be used as part of the Gateway API backendRef for configuring routes. For more information about the Multicluster Services API, refer to the SIG documentation.
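
For reference, a backendRef to a ServiceImport looks roughly like the sketch below (the route name backend-mc is hypothetical; the manifest actually used in this task is applied in a later step):

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend-mc   # hypothetical name, for illustration only
spec:
  parentRefs:
  - name: eg
  rules:
  - backendRefs:
    - group: multicluster.x-k8s.io
      kind: ServiceImport
      name: backend
      port: 3000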

We will use the Submariner project to set up the multicluster environment and export the service so it can be routed from peer clusters.

Setting up KIND clusters and installing Submariner

  • We will be using KIND clusters to demonstrate this example.
git clone https://github.com/submariner-io/submariner-operator
cd submariner-operator
make clusters

Note: remain in the submariner-operator directory for the rest of the steps in this section.

  • Install subctl:
curl -Ls https://get.submariner.io  | VERSION=v0.14.6 bash
  • Set up the Multicluster Services API and Submariner for cross-cluster traffic using ServiceImport:
subctl deploy-broker --kubeconfig output/kubeconfigs/kind-config-cluster1 --globalnet
subctl join --kubeconfig output/kubeconfigs/kind-config-cluster1 broker-info.subm --clusterid cluster1 --natt=false
subctl join --kubeconfig output/kubeconfigs/kind-config-cluster2 broker-info.subm --clusterid cluster2 --natt=false

Once the above steps are done and all the pods are up in both clusters, we are ready to install Envoy Gateway.

Install Envoy Gateway

Install the Gateway API CRDs and Envoy Gateway in cluster1:

helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace --kubeconfig output/kubeconfigs/kind-config-cluster1

Wait for Envoy Gateway to become available:

kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available --kubeconfig output/kubeconfigs/kind-config-cluster1

Install Application

Install the backend application in cluster2 and export it through the subctl command.

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/application.yaml --kubeconfig output/kubeconfigs/kind-config-cluster2
subctl export service backend --namespace default --kubeconfig output/kubeconfigs/kind-config-cluster2

Create Gateway API Objects

Create the Gateway API objects GatewayClass, Gateway and HTTPRoute in cluster1 to set up the routing.

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/multicluster-service.yaml --kubeconfig output/kubeconfigs/kind-config-cluster1

Testing the Configuration

Get the name of the Envoy service created by the example Gateway:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

Port forward to the Envoy service:

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8888:80 &

Curl the example app through Envoy proxy:

curl --verbose --header "Host: www.example.com" http://localhost:8888/get

2.24 - Response Override

Response Override allows you to override the response from the backend with a custom one. This can be useful for scenarios such as returning a custom 404 page when the requested resource is not found or a custom 500 error message when the backend is failing.

Installation

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Testing Response Override

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: response-override
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  responseOverride:
    - match:
        statusCodes:
          - type: Value
            value: 404
      response:
        contentType: text/plain
        body:
          type: Inline
          inline: "Oops! Your request is not found."
    - match:
        statusCodes:
          - type: Value
            value: 500
          - type: Range
            range:
              start: 501
              end: 511
      response:
        contentType: application/json
        body:
          type: ValueRef
          valueRef:
            group: ""
            kind: ConfigMap
            name: response-override-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: response-override-config
data:
  response.body: '{"error": "Internal Server Error"}'
EOF

curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/status/404
*   Trying 127.0.0.1:80...
* Connected to 172.18.0.200 (172.18.0.200) port 80
> GET /status/404 HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< content-type: text/plain
< content-length: 32
< date: Thu, 07 Nov 2024 09:22:29 GMT
<
* Connection #0 to host 172.18.0.200 left intact
Oops! Your request is not found.
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/status/500
*   Trying 127.0.0.1:80...
* Connected to 172.18.0.200 (172.18.0.200) port 80
> GET /status/500 HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< content-type: application/json
< content-length: 34
< date: Thu, 07 Nov 2024 09:23:02 GMT
<
* Connection #0 to host 172.18.0.200 left intact
{"error": "Internal Server Error"}

2.25 - Retry

A retry setting specifies the maximum number of times an Envoy proxy attempts to connect to a service if the initial call fails. Retries can enhance service availability and application performance by making sure that calls don’t fail permanently because of transient problems such as a temporarily overloaded service or network. The interval between retries prevents the called service from being overwhelmed with requests.

Envoy Gateway supports the following retry settings:

  • NumRetries: the number of retries to be attempted. Defaults to 2.
  • RetryOn: specifies the retry trigger condition.
  • PerRetryPolicy: the retry policy to be applied per retry attempt.

Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired retry settings. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Note: There are distinct circuit breaker counters for each BackendReference in an xRoute rule. Even if a BackendTrafficPolicy targets a Gateway, each BackendReference in that gateway still has a separate circuit breaker counter.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Test and customize retry settings

Before applying a BackendTrafficPolicy with a retry setting to a route, let’s test the default retry settings.

curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/status/500"

It will return a 500 response immediately.

*   Trying 172.18.255.200:80...
* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET /status/500 HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< date: Fri, 01 Mar 2024 15:12:55 GMT
< content-length: 0
<
* Connection #0 to host 172.18.255.200 left intact

Let’s create a BackendTrafficPolicy with a retry setting.

The request will be retried 5 times with a 100ms base interval and a 10s maximum interval.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: retry-for-route
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: backend
  retry:
    numRetries: 5
    perRetry:
      backOff:
        baseInterval: 100ms
        maxInterval: 10s
      timeout: 250ms
    retryOn:
      httpStatusCodes:
        - 500
      triggers:
        - connect-failure
        - retriable-status-codes
EOF

Save and apply the following resource to your cluster:

---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: retry-for-route
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: backend
  retry:
    numRetries: 5
    perRetry:
      backOff:
        baseInterval: 100ms
        maxInterval: 10s
      timeout: 250ms
    retryOn:
      httpStatusCodes:
        - 500
      triggers:
        - connect-failure
        - retriable-status-codes

Execute the test again.

curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/status/500"

It will return a 500 response after a short delay, once the retries have been exhausted.

*   Trying 172.18.255.200:80...
* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET /status/500 HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< date: Fri, 01 Mar 2024 15:15:53 GMT
< content-length: 0
<
* Connection #0 to host 172.18.255.200 left intact

Let’s check the stats to see the retry behavior.

egctl x stats envoy-proxy -n envoy-gateway-system -l gateway.envoyproxy.io/owning-gateway-name=eg,gateway.envoyproxy.io/owning-gateway-namespace=default | grep "envoy_cluster_upstream_rq_retry{envoy_cluster_name=\"httproute/default/backend/rule/0\"}"

You should see stats similar to the following:

envoy_cluster_upstream_rq_retry{envoy_cluster_name="httproute/default/backend/rule/0"} 5

2.26 - Routing outside Kubernetes

Routing to endpoints outside the Kubernetes cluster where Envoy Gateway and its corresponding Envoy Proxy fleet is running is a common use case. This can be achieved by defining FQDN addresses in an EndpointSlice, as shown below.

Installation

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Configuration

Define a Service and an EndpointSlice that represent https://httpbin.org:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: default
spec:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 443
      name: https
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: httpbin
  namespace: default
  labels:
    kubernetes.io/service-name: httpbin 
addressType: FQDN
ports:
- name: https
  protocol: TCP
  port: 443
endpoints:
- addresses:
  - "httpbin.org"
EOF

Update the Gateway to include a TLS Listener on port 443

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/-
    value:
      name: tls
      protocol: TLS
      port: 443
      tls:
        mode: Passthrough
  '
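
You can verify that the listener was appended (an illustrative check):

kubectl get gateway/eg -o jsonpath='{.spec.listeners[*].name}'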

Add a TLSRoute that can route incoming traffic to the above backend that we created

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: httpbin 
spec:
  parentRefs:
  - name: eg 
    sectionName: tls
  rules:
  - backendRefs:
    - name: httpbin
      port: 443
EOF

Get the Gateway address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Send a request and view the response:

curl -I -HHost:httpbin.org --resolve "httpbin.org:443:${GATEWAY_HOST}" https://httpbin.org/

2.27 - TCP Routing

TCPRoute provides a way to route TCP requests. When combined with a Gateway listener, it can be used to forward connections on the port specified by the listener to a set of backends specified by the TCPRoute. To learn more about TCP routing, refer to the Gateway API documentation.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Configuration

In this example, we have one Gateway resource and two TCPRoute resources that distribute the traffic with the following rules:

  • All TCP streams on port 8088 of the Gateway are forwarded to port 3001 of the foo Kubernetes Service.
  • All TCP streams on port 8089 of the Gateway are forwarded to port 3002 of the bar Kubernetes Service.

In this example, two TCP listeners will be applied to the Gateway in order to route traffic to two separate backend TCPRoutes. Note that the protocol set for the listeners on the Gateway is TCP:

Install the GatewayClass and a tcp-gateway Gateway first.

cat <<EOF | kubectl apply -f -
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: tcp-gateway
spec:
  gatewayClassName: eg
  listeners:
  - name: foo
    protocol: TCP
    port: 8088
    allowedRoutes:
      kinds:
      - kind: TCPRoute
  - name: bar
    protocol: TCP
    port: 8089
    allowedRoutes:
      kinds:
      - kind: TCPRoute
EOF

Install two services foo and bar, which are bound to backend-1 and backend-2.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: foo
  labels:
    app: backend-1
spec:
  ports:
    - name: http
      port: 3001
      targetPort: 3000
  selector:
    app: backend-1
---
apiVersion: v1
kind: Service
metadata:
  name: bar
  labels:
    app: backend-2
spec:
  ports:
    - name: http
      port: 3002
      targetPort: 3000
  selector:
    app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-1
      version: v1
  template:
    metadata:
      labels:
        app: backend-1
        version: v1
    spec:
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: backend-1
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: SERVICE_NAME
              value: foo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend-2
      version: v1
  template:
    metadata:
      labels:
        app: backend-2
        version: v1
    spec:
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: backend-2
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: SERVICE_NAME
              value: bar
EOF

Install two TCPRoutes, tcp-app-1 and tcp-app-2, each with a different sectionName:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: tcp-app-1
spec:
  parentRefs:
  - name: tcp-gateway
    sectionName: foo
  rules:
  - backendRefs:
    - name: foo
      port: 3001
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: tcp-app-2
spec:
  parentRefs:
  - name: tcp-gateway
    sectionName: bar
  rules:
  - backendRefs:
    - name: bar
      port: 3002
EOF

In the above example, we separate the traffic for the two backend TCP Services by using the sectionName field in the parentRefs:

spec:
  parentRefs:
  - name: tcp-gateway
    sectionName: foo

This corresponds directly with the name in the listeners in the Gateway:

  listeners:
  - name: foo
    protocol: TCP
    port: 8088
  - name: bar
    protocol: TCP
    port: 8089

In this way, each TCPRoute “attaches” itself to a different port on the Gateway, so that the foo service receives traffic sent to port 8088 from outside the cluster and the bar service receives traffic sent to port 8089.
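
You can verify that both TCPRoutes have been accepted by their respective listeners by inspecting their status:

kubectl get tcproute/tcp-app-1 -o yaml
kubectl get tcproute/tcp-app-2 -o yaml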

Before testing, get the tcp-gateway Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/tcp-gateway -o jsonpath='{.status.addresses[0].value}')

You can use nc to test the TCP connections to Envoy Gateway on the two different ports and see them succeed:

nc -zv ${GATEWAY_HOST} 8088

nc -zv ${GATEWAY_HOST} 8089

You can also send requests to Envoy Gateway and receive responses, as shown below:

curl -i "http://${GATEWAY_HOST}:8088"

HTTP/1.1 200 OK
Content-Type: application/json
X-Content-Type-Options: nosniff
Date: Tue, 03 Jan 2023 10:18:36 GMT
Content-Length: 267

{
 "path": "/",
 "host": "xxx.xxx.xxx.xxx:8088",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.85.0"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "foo",
 "pod": "backend-1-c6c5fb958-dl8vl"
}

You can see that traffic is routed to the foo service when sending requests to port 8088.

curl -i "http://${GATEWAY_HOST}:8089"

HTTP/1.1 200 OK
Content-Type: application/json
X-Content-Type-Options: nosniff
Date: Tue, 03 Jan 2023 10:19:28 GMT
Content-Length: 267

{
 "path": "/",
 "host": "xxx.xxx.xxx.xxx:8089",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.85.0"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "bar",
 "pod": "backend-2-98fcff498-hcmgb"
}                                            

You can see that traffic is routed to the bar service when sending requests to port 8089.

2.28 - UDP Routing

The UDPRoute resource allows users to configure UDP routing by matching UDP traffic and forwarding it to Kubernetes backends. This task uses the CoreDNS example to walk you through the steps required to configure UDPRoute on Envoy Gateway.

Note: UDPRoute allows Envoy Gateway to operate as a non-transparent proxy between a UDP client and server. The lack of transparency means that the upstream server will see the source IP and port of the Gateway instead of the client. For additional information, refer to Envoy’s UDP proxy documentation.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Installation

Install CoreDNS in the Kubernetes cluster as the example backend. The installed CoreDNS is listening on UDP port 53 for DNS lookups.

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/udp-routing-example-backend.yaml

Wait for the CoreDNS deployment to become available:

kubectl wait --timeout=5m deployment/coredns --for=condition=Available

Update the Gateway from the Quickstart to include a UDP listener that listens on UDP port 5300:

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/-
    value:
      name: coredns
      protocol: UDP
      port: 5300
      allowedRoutes:
        kinds:
        - kind: UDPRoute
  '

Verify the Gateway status:

kubectl get gateway/eg -o yaml

Configuration

Create a UDPRoute resource to route UDP traffic received on Gateway port 5300 to the CoreDNS backend.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
  name: coredns
spec:
  parentRefs:
    - name: eg
      sectionName: coredns
  rules:
    - backendRefs:
        - name: coredns
          port: 53
EOF

Verify the UDPRoute status:

kubectl get udproute/coredns -o yaml

Testing

Get the External IP of the Gateway:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Use the dig command to query the DNS entry foo.bar.com through the Gateway.

dig @${GATEWAY_HOST} -p 5300 foo.bar.com

You should see a result like the output below, which means that the DNS query has been successfully routed to the backend CoreDNS.

Note: 49.51.177.138 is the resolved address of GATEWAY_HOST.

; <<>> DiG 9.18.1-1ubuntu1.1-Ubuntu <<>> @49.51.177.138 -p 5300 foo.bar.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58125
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 3
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 24fb86eba96ebf62 (echoed)
;; QUESTION SECTION:
;foo.bar.com.			IN	A

;; ADDITIONAL SECTION:
foo.bar.com.		0	IN	A	10.244.0.19
_udp.foo.bar.com.	0	IN	SRV	0 0 42376 .

;; Query time: 1 msec
;; SERVER: 49.51.177.138#5300(49.51.177.138) (UDP)
;; WHEN: Fri Jan 13 10:20:34 UTC 2023
;; MSG SIZE  rcvd: 114

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway.

Delete the CoreDNS example manifest and the UDPRoute:

kubectl delete deploy/coredns
kubectl delete service/coredns
kubectl delete cm/coredns
kubectl delete udproute/coredns

Next Steps

Checkout the Developer Guide to get involved in the project.

3 - Security

This section includes Security tasks.

3.1 - Accelerating TLS Handshakes using Private Key Provider in Envoy

TLS operations can be accelerated, or the private key can be protected, using specialized hardware. This can be leveraged in Envoy through the Private Key Provider extension.

Today, there are two private key providers implemented in Envoy as contrib extensions:

  • QAT
  • CryptoMB

Both of them are used to accelerate the TLS handshake through hardware capabilities.

This task will walk you through the steps required to configure TLS Termination mode for TCP traffic while also using an Envoy Private Key Provider to accelerate the TLS handshake, by leveraging the QAT hardware accelerator available on Intel SPR/EMR Xeon server platforms.

Prerequisites

  • Install Linux kernel 5.17 or later

  • Ensure the node has QAT devices by checking the QAT physical function devices presented. Supported Devices

    echo `(lspci -d 8086:4940 && lspci -d 8086:4941 && lspci -d 8086:4942 && lspci -d 8086:4943 && lspci -d 8086:4946 && lspci -d 8086:4947) | wc -l` supported devices found.
    
  • Enable IOMMU from BIOS

  • Enable IOMMU for Linux kernel

    Figure out the QAT VF device id

    lspci -d 8086:4941 && lspci -d 8086:4943 && lspci -d 8086:4947
    

    Attach the QAT device to vfio-pci through a kernel parameter, using the device ID obtained from the previous command.

    # Add the kernel parameters to /etc/default/grub:
    GRUB_CMDLINE_LINUX="intel_iommu=on vfio-pci.ids=[QAT device id]"
    update-grub
    reboot
    

    Once the system is rebooted, check if the IOMMU has been enabled via the following command:

    dmesg | grep IOMMU
    [    1.528237] DMAR: IOMMU enabled
    
  • Enable virtual function devices for QAT device

    modprobe vfio_pci
    rmmod qat_4xxx
    modprobe qat_4xxx
    qat_device=$(lspci -D -d :[QAT device id] | awk '{print $1}')
    for i in $qat_device; do echo 16|sudo tee /sys/bus/pci/devices/$i/sriov_numvfs; done
    chmod a+rw /dev/vfio/*
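
    Once the virtual functions are bound, you can confirm that the corresponding VFIO device nodes were created:

    ls /dev/vfio/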
    
  • Increase the container runtime memory lock limit (using containerd as the example here)

    mkdir /etc/systemd/system/containerd.service.d
    cat <<EOF >>/etc/systemd/system/containerd.service.d/memlock.conf
    [Service]
    LimitMEMLOCK=134217728
    EOF
    

    Restart the container runtime (shown here for containerd; CRI-O has a similar concept)

    systemctl daemon-reload
    systemctl restart containerd
    
  • Install Intel® QAT Device Plugin for Kubernetes

    kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/qat_plugin?ref=main'
    

    Verify the plugin deployment and the detection of QAT hardware by examining the resource allocations on the nodes:

    kubectl get node -o yaml | grep qat.intel.com
    

This requires a node with a 3rd generation Intel Xeon Scalable server processor, or later.

  • If not all nodes in the Kubernetes cluster support Intel® AVX-512, you need to add labels to distinguish the two kinds of nodes, either manually or by using NFD.

    kubectl apply -k https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.15.1
    
  • Check the available nodes for the required CPU instructions:

    • Check the node labels if using NFD:

      kubectl get nodes -l feature.node.kubernetes.io/cpu-cpuid.AVX512F,feature.node.kubernetes.io/cpu-cpuid.AVX512DQ,feature.node.kubernetes.io/cpu-cpuid.AVX512BW,feature.node.kubernetes.io/cpu-cpuid.AVX512VBMI2,feature.node.kubernetes.io/cpu-cpuid.AVX512IFMA
      
    • Check the CPU IDs manually on the node if not using NFD:

      cat /proc/cpuinfo |grep avx512f|grep avx512dq|grep avx512bw|grep avx512_vbmi2|grep avx512ifma
      

Installation

  • Follow the steps from the Quickstart to install Envoy Gateway.

  • Enable the EnvoyPatchPolicy feature, which will allow us to directly configure the Private Key Provider Envoy Filter, since Envoy Gateway does not directly expose this functionality.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    extensionApis:
      enableEnvoyPatchPolicy: true
EOF

After updating the ConfigMap, you will need to wait until the new configuration kicks in.
You can force the configuration to be reloaded by restarting the envoy-gateway deployment.

kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
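
Wait for the restart to complete before proceeding:

kubectl rollout status deployment/envoy-gateway -n envoy-gateway-system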

Create gateway for TLS termination

  • Follow the instructions in TLS Termination for TCP to setup a TCP gateway to terminate the TLS connection.

  • Update the GatewayClass to use the envoyproxy image with contrib extensions and to request the required resources.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy-config
    namespace: envoy-gateway-system
EOF

Change EnvoyProxy configuration

Use the envoyproxy image with contrib extensions and add QAT resource requests, so that the Kubernetes scheduler places the Envoy proxy pod on a machine with the required resources.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  concurrency: 1
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        type: NodePort
      envoyDeployment:
        container:
          image: envoyproxy/envoy-contrib-dev:latest
          resources:
            requests:
              cpu: 1000m
              memory: 4096Mi
              qat.intel.com/cy: '1'
            limits:
              cpu: 1000m
              memory: 4096Mi
              qat.intel.com/cy: '1'
EOF

Alternatively, use the envoyproxy image with contrib extensions and add node affinity to schedule the Envoy proxy pod onto a machine with the required CPU instructions.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  concurrency: 1
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        type: NodePort
      envoyDeployment:
        container:
          image: envoyproxy/envoy-contrib-dev:latest
          resources:
            requests:
              cpu: 1000m
              memory: 4096Mi
            limits:
              cpu: 1000m
              memory: 4096Mi
        pod:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                - matchExpressions:
                  - key: feature.node.kubernetes.io/cpu-cpuid.AVX512F
                    operator: Exists
                  - key: feature.node.kubernetes.io/cpu-cpuid.AVX512DQ
                    operator: Exists
                  - key: feature.node.kubernetes.io/cpu-cpuid.AVX512BW
                    operator: Exists
                  - key: feature.node.kubernetes.io/cpu-cpuid.AVX512IFMA
                    operator: Exists
                  - key: feature.node.kubernetes.io/cpu-cpuid.AVX512VBMI2
                    operator: Exists
EOF

Alternatively, use preferredDuringSchedulingIgnoredDuringExecution for best-effort scheduling, or omit node affinity entirely and rely on default scheduling. The CryptoMB private key provider falls back to software if the required CPU instructions aren’t available.

Benchmark before enabling private key provider

First follow the instructions in TLS Termination for TCP to do the functionality test.

Ensure the CPU frequency governor is set to performance.

export NUM_CPUS=$(lscpu | grep "^CPU(s):" | awk '{print $2}')
for i in $(seq 0 $((NUM_CPUS - 1))); do sudo cpufreq-set -c $i -g performance; done
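
You can read the governor back from sysfs to confirm the setting:

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c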

Using NodePort as the example, fetch the node port from the Envoy service.

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
export NODE_PORT=$(kubectl -n envoy-gateway-system get svc/$ENVOY_SERVICE -o jsonpath='{.spec.ports[0].nodePort}')
echo "127.0.0.1 www.example.com" >> /etc/hosts

Benchmark the gateway with fortio.

fortio load -c 10 -k -qps 0 -t 30s -keepalive=false https://www.example.com:${NODE_PORT}

Apply EnvoyPatchPolicy to enable private key provider

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
  name: key-provider-patch-policy
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  type: JSONPatch
  jsonPatches:
    - type: "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
      name: default/example-cert
      operation:
        op: add
        path: "/tls_certificate/private_key_provider"
        value:
          provider_name: qat
          typed_config:
            "@type": "type.googleapis.com/envoy.extensions.private_key_providers.qat.v3alpha.QatPrivateKeyMethodConfig"
            private_key:
              inline_string: |
                abcd
            poll_delay: 0.001s
          fallback: true
    - type: "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
      name: default/example-cert
      operation:
        op: copy
        from: "/tls_certificate/private_key"
        path: "/tls_certificate/private_key_provider/typed_config/private_key"
EOF

The EnvoyPatchPolicy above enables the QAT private key provider. Alternatively, apply the following EnvoyPatchPolicy to enable the CryptoMB private key provider:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
  name: key-provider-patch-policy
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  type: JSONPatch
  jsonPatches:
    - type: "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
      name: default/example-cert
      operation:
        op: add
        path: "/tls_certificate/private_key_provider"
        value:
          provider_name: cryptomb
          typed_config:  
            "@type": "type.googleapis.com/envoy.extensions.private_key_providers.cryptomb.v3alpha.CryptoMbPrivateKeyMethodConfig"
            private_key:
              inline_string: |
                abcd
            poll_delay: 0.001s
          fallback: true
    - type: "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
      name: default/example-cert
      operation:
        op: copy
        from: "/tls_certificate/private_key"
        path: "/tls_certificate/private_key_provider/typed_config/private_key"
EOF
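
Either way, you can verify that the patch policy has been accepted by checking its status:

kubectl get envoypatchpolicy/key-provider-patch-policy -n default -o yaml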

Benchmark after enabling private key provider

First follow the instructions in TLS Termination for TCP to do the functionality test again.

Benchmark the gateway with fortio.

fortio load -c 64 -k -qps 0 -t 30s -keepalive=false https://www.example.com:${NODE_PORT}

Benchmark Result

You will see a performance boost after the private key provider is enabled. For example, you may get results like the ones below.

Without private key provider:

All done 43069 calls (plus 10 warmup) 6.966 ms avg, 1435.4 qps

With the QAT private key provider, the QPS is over 3 times higher than without a private key provider:

All done 134746 calls (plus 128 warmup) 28.505 ms avg, 4489.6 qps

With the CryptoMB private key provider, the QPS is over 2 times higher than without a private key provider:

All done 93983 calls (plus 128 warmup) 40.880 ms avg, 3130.5 qps

3.2 - Backend Mutual TLS: Gateway to Backend

This task demonstrates how mTLS can be achieved between the Gateway and a backend. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.

Envoy Gateway supports the Gateway-API defined BackendTLSPolicy to establish TLS. For mTLS, the Gateway must authenticate by presenting a client certificate to the backend.

Prerequisites

  • OpenSSL to generate TLS assets.

Installation

Follow the steps from the Backend TLS task to install Envoy Gateway and configure TLS to the backend server.

TLS Certificates

Generate the certificates and keys used by the Gateway for authentication against the backend.

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout clientca.key -out clientca.crt

Create a client certificate and a private key for the Gateway to present to the backend:

openssl req -new -newkey rsa:2048 -nodes -keyout client.key -out client.csr -subj "/CN=example-client/O=example organization"
openssl x509 -req -days 365 -CA clientca.crt -CAkey clientca.key -set_serial 0 -in client.csr -out client.crt

Store the cert/key in a Secret:

kubectl -n envoy-gateway-system create secret tls example-client-cert --key=client.key --cert=client.crt

Store the CA certificate in a ConfigMap:

kubectl create configmap example-client-ca --from-file=clientca.crt

Enforce Client Certificate Authentication on the backend

Patch the existing quickstart backend to enforce Client Certificate Authentication. The patch mounts the server certificate and key required for TLS, as well as the CA certificate, into the backend as volumes.

kubectl patch deployment backend --type=json --patch '
  - op: add
    path: /spec/template/spec/containers/0/volumeMounts
    value:
    - name: client-certs-volume
      mountPath: /etc/client-certs
    - name: secret-volume
      mountPath: /etc/secret-volume      
  - op: add
    path: /spec/template/spec/volumes
    value:
    - name: client-certs-volume
      configMap:
        name: example-client-ca
        items:
        - key: clientca.crt
          path: crt
    - name: secret-volume
      secret:
        secretName: example-cert
        items:
        - key: tls.crt
          path: crt
        - key: tls.key
          path: key          
  - op: add
    path: /spec/template/spec/containers/0/env/-
    value:
      name: TLS_CLIENT_CACERTS
      value: /etc/client-certs/crt
  '
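
Wait for the patched backend deployment to roll out before testing:

kubectl rollout status deployment/backend --timeout=2m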

Configure Envoy Proxy to use a client certificate

In addition to enabling backend TLS with the Gateway API BackendTLSPolicy, Envoy Gateway supports customizing TLS parameters such as the TLS client certificate. To achieve this, the EnvoyProxy resource can be used to specify a TLS client certificate.

First, you need to add a parametersRef to the GatewayClass that refers to the EnvoyProxy config:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy-config
    namespace: envoy-gateway-system
EOF

Then, reference the client certificate Secret in the EnvoyProxy configuration:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  backendTLS:
    clientCertificateRef: 
      kind: Secret
      name: example-client-cert
      namespace: envoy-gateway-system
EOF

Testing mTLS
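
The test below assumes the port-forward to the Envoy service from the Backend TLS task is still running on local port 80; if it is not, restart it:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 80:80 &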

Query the TLS-enabled backend through Envoy proxy:

curl -v -HHost:www.example.com --resolve "www.example.com:80:127.0.0.1" \
http://www.example.com:80/get

Inspect the output and see that the response contains the details of the TLS handshake between Envoy and the backend. The response now contains a “peerCertificates” attribute that reflects the client certificate used by the Gateway to establish mTLS with the backend.

< HTTP/1.1 200 OK
[...]
 "tls": {
  "version": "TLSv1.2",
  "serverName": "www.example.com",
  "negotiatedProtocol": "http/1.1",
  "cipherSuite": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
  "peerCertificates": ["-----BEGIN CERTIFICATE-----\n[...]-----END CERTIFICATE-----\n"]
 }

3.3 - Backend TLS: Gateway to Backend

This task demonstrates how TLS can be achieved between the Gateway and a backend. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.

Envoy Gateway supports the Gateway-API defined BackendTLSPolicy.

Prerequisites

  • OpenSSL to generate TLS assets.

Installation

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

TLS Certificates

Generate the certificates and keys used by the backend to terminate TLS connections from the Gateways.

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout ca.key -out ca.crt

Create a certificate and a private key for www.example.com.

First, create an openssl configuration file:

cat > openssl.conf  <<EOF
[req]
req_extensions = v3_req
prompt = no

[v3_req]
keyUsage = keyEncipherment, digitalSignature
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = www.example.com
EOF

Then create a certificate using this openssl configuration file:

openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA ca.crt -CAkey ca.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt -extfile openssl.conf -extensions v3_req

Note that the certificate must contain a DNS SAN for the relevant domain.
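
You can check that the SAN made it into the issued certificate:

openssl x509 -in www.example.com.crt -noout -text | grep -A1 'Subject Alternative Name'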

Store the cert/key in a Secret:

kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt

Store the CA certificate in a ConfigMap:

kubectl create configmap example-ca --from-file=ca.crt

Setup TLS on the backend

Patch the existing quickstart backend to enable TLS. The patch will mount the TLS certificate secret into the backend as a volume.

kubectl patch deployment backend --type=json --patch '
  - op: add
    path: /spec/template/spec/containers/0/volumeMounts
    value:
    - name: secret-volume
      mountPath: /etc/secret-volume
  - op: add
    path: /spec/template/spec/volumes
    value:
    - name: secret-volume
      secret:
        secretName: example-cert
        items:
        - key: tls.crt
          path: crt
        - key: tls.key
          path: key
  - op: add
    path: /spec/template/spec/containers/0/env/-
    value:
      name: TLS_SERVER_CERT
      value: /etc/secret-volume/crt
  - op: add
    path: /spec/template/spec/containers/0/env/-
    value:
      name: TLS_SERVER_PRIVKEY
      value: /etc/secret-volume/key
  '

Create a Service that exposes port 443 on the backend:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    app: backend
    service: backend
  name: tls-backend
  namespace: default
spec:
  selector:
    app: backend
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
EOF

Create a BackendTLSPolicy instructing Envoy Gateway to establish a TLS connection with the backend and to validate that the backend certificate is issued by a trusted CA and contains an appropriate DNS SAN.

Note: SectionName is an optional field that specifies the name of the port in the target backend. This example uses a Kubernetes Service as the backend target, so the sectionName is set to https to match the port name in the Service. If the target is a Backend resource, the sectionName field should be set to the port number of the backend.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: enable-backend-tls
  namespace: default
spec:
  targetRefs:
  - group: ''
    kind: Service
    name: tls-backend
    sectionName: https
  validation:
    caCertificateRefs:
    - name: example-ca
      group: ''
      kind: ConfigMap
    hostname: www.example.com
EOF

Patch the HTTPRoute’s backend reference, so that it refers to the new TLS-enabled service:

kubectl patch HTTPRoute backend --type=json --patch '
  - op: replace
    path: /spec/rules/0/backendRefs/0/port
    value: 443
  - op: replace
    path: /spec/rules/0/backendRefs/0/name
    value: tls-backend
  '

Verify the HTTPRoute status:

kubectl get HTTPRoute backend -o yaml

Testing backend TLS

Get the External IP of the Gateway:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Query the example app through the Gateway:

curl -v -HHost:www.example.com --resolve "www.example.com:80:${GATEWAY_HOST}" \
http://www.example.com:80/get

Inspect the output and see that the response contains the details of the TLS handshake between Envoy and the backend:

< HTTP/1.1 200 OK
[...]
 "tls": {
  "version": "TLSv1.2",
  "serverName": "www.example.com",
  "negotiatedProtocol": "http/1.1",
  "cipherSuite": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
 }

Get the name of the Envoy service created by the example Gateway:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

Port forward to the Envoy service:

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 80:80 &

Query the TLS-enabled backend through Envoy proxy:

curl -v -HHost:www.example.com --resolve "www.example.com:80:127.0.0.1" \
http://www.example.com:80/get

Inspect the output and see that the response contains the details of the TLS handshake between Envoy and the backend:

< HTTP/1.1 200 OK
[...]
 "tls": {
  "version": "TLSv1.2",
  "serverName": "www.example.com",
  "negotiatedProtocol": "http/1.1",
  "cipherSuite": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
 }

Customize backend TLS Parameters

In addition to enabling backend TLS with the Gateway API BackendTLSPolicy, Envoy Gateway supports customizing TLS parameters. To achieve this, the EnvoyProxy resource can be used to specify TLS parameters. We will customize the minimum TLS version in this example.

First, you need to add a parametersRef to the GatewayClass that refers to the EnvoyProxy config:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy-config
    namespace: envoy-gateway-system
EOF

You can then customize the backend TLS parameters via the EnvoyProxy config like this:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  backendTLS:
    minVersion: "1.3"
EOF

Testing TLS Parameters

Query the TLS-enabled backend through Envoy proxy:

curl -v -HHost:www.example.com --resolve "www.example.com:80:127.0.0.1" \
http://www.example.com:80/get

Inspect the output and see that the response contains the details of the TLS handshake between Envoy and the backend. The TLS version is now TLS1.3, as configured in the EnvoyProxy resource. The TLS cipher is also changed, since TLS1.3 supports different ciphers from TLS1.2.

< HTTP/1.1 200 OK
[...]
 "tls": {
  "version": "TLSv1.3",
  "serverName": "www.example.com",
  "negotiatedProtocol": "http/1.1",
  "cipherSuite": "TLS_AES_128_GCM_SHA256"
 }

3.4 - Basic Authentication

This task provides instructions for configuring HTTP Basic authentication. HTTP Basic authentication checks if an incoming request has a valid username and password before routing the request to a backend service.

Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure HTTP Basic authentication. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Configuration

Envoy Gateway uses the .htpasswd format to store the username-password pairs for authentication. The file must be stored in a Kubernetes Secret and referenced in the SecurityPolicy configuration. The Secret is an Opaque secret, and the username-password pairs must be stored under the key “.htpasswd”.

Create a root certificate

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt

Create a certificate secret

Create a certificate and a private key for www.example.com:

openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt

Create certificate

kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt

Enable HTTPS

Update the Gateway from the Quickstart to include an HTTPS listener that listens on port 443 and references the example-cert Secret:

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/-
    value:
      name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            group: ""
            name: example-cert
  '

Create a .htpasswd file

First, create a .htpasswd file with the username and password you want to use for authentication.

Note: Please always use HTTPS with Basic Authentication. This prevents credentials from being transmitted in plain text.

The input password won’t be saved; instead, a hash will be generated and saved in the output file. When a request tries to access protected resources, the password in the “Authorization” HTTP header will be hashed and compared with the saved hash.

Note: only the SHA hash algorithm is supported for now.

htpasswd -cbs .htpasswd foo bar
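
The resulting file contains one line per user, where the value after {SHA} is the base64-encoded SHA-1 hash of the password (shown here for user foo with password bar):

foo:{SHA}Ys23Ag/5IOWqZCw9QGaVDdHwH00=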

You can also add more users to the file:

htpasswd -bs .htpasswd foo1 bar1

Create a basic-auth secret

Next, create a kubernetes secret with the generated .htpasswd file in the previous step.

kubectl create secret generic basic-auth --from-file=.htpasswd

Create a SecurityPolicy

The below example defines a SecurityPolicy that authenticates requests against the user list in the kubernetes secret generated in the previous step.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: basic-auth-example
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: backend
  basicAuth:
    users:
      name: "basic-auth"
EOF

Verify the SecurityPolicy configuration:

kubectl get securitypolicy/basic-auth-example -o yaml

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

Send a request to the backend service without Authentication header:

curl -kv -H "Host: www.example.com" "https://${GATEWAY_HOST}/" 

You should see 401 Unauthorized in the response, indicating that the request is not allowed without authentication.

* Connected to 127.0.0.1 (127.0.0.1) port 443
...
* Server certificate:
*  subject: CN=www.example.com; O=example organization
*  issuer: O=example Inc.; CN=example.com
> GET / HTTP/2
> Host: www.example.com
> User-Agent: curl/8.6.0
> Accept: */*
...
< HTTP/2 401
< content-length: 58
< content-type: text/plain
< date: Wed, 06 Mar 2024 15:59:36 GMT
<

* Connection #0 to host 127.0.0.1 left intact
User authentication failed. Missing username and password.

Send a request to the backend service with Authentication header:

curl -kv -H "Host: www.example.com" -u 'foo:bar' "https://${GATEWAY_HOST}/" 

The request should be allowed and you should see the response from the backend service.

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the SecurityPolicy and the secret

kubectl delete securitypolicy/basic-auth-example
kubectl delete secret/basic-auth
kubectl delete secret/example-cert

Next Steps

Checkout the Developer Guide to get involved in the project.

3.5 - CORS

This task provides instructions for configuring Cross-Origin Resource Sharing (CORS) on Envoy Gateway. CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.

Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure CORS. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Configuration

When configuring CORS, you can specify either an origin with a precise hostname or a hostname containing a wildcard prefix, which allows all subdomains of the specified hostname. In addition, the entire origin (with or without a scheme) can be a wildcard to allow all origins.
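
Since the entire origin can be a wildcard, allowing every origin is a one-line change; a minimal snippet (not applied in this task):

  cors:
    allowOrigins:
    - "*"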

The below example defines a SecurityPolicy that allows CORS for all HTTP requests originating from www.foo.com.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: cors-example
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  cors:
    allowOrigins:
    - "http://*.foo.com"
    - "http://*.foo.com:80"
    allowMethods:
    - GET
    - POST
    allowHeaders:
    - "x-header-1"
    - "x-header-2"
    exposeHeaders:
    - "x-header-3"
    - "x-header-4"
EOF

Verify the SecurityPolicy configuration:

kubectl get securitypolicy/cors-example -o yaml

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

Verify that the CORS headers are present in the response of the OPTIONS request from http://www.foo.com:

curl -H "Origin: http://www.foo.com" \
  -H "Host: www.example.com" \
  -H "Access-Control-Request-Method: GET" \
  -X OPTIONS -v -s \
  http://$GATEWAY_HOST \
  1> /dev/null

You should see the below response, indicating that the request from http://www.foo.com is allowed:

< access-control-allow-origin: http://www.foo.com
< access-control-allow-methods: GET, POST
< access-control-allow-headers: x-header-1, x-header-2
< access-control-max-age: 86400
< access-control-expose-headers: x-header-3, x-header-4

If you try to send a request from http://www.bar.com instead:

curl -H "Origin: http://www.bar.com" \
  -H "Host: www.example.com" \
  -H "Access-Control-Request-Method: GET" \
  -X OPTIONS -v -s \
  http://$GATEWAY_HOST \
  1> /dev/null

You won’t see any CORS headers in the response, indicating that the request from http://www.bar.com was not allowed.

If you try to send a request from http://www.foo.com:8080, you should also see a similar response, because port 8080 is not included in the allowed origins.

curl -H "Origin: http://www.foo.com:8080" \
  -H "Host: www.example.com" \
  -H "Access-Control-Request-Method: GET" \
  -X OPTIONS -v -s \
  http://$GATEWAY_HOST \
  1> /dev/null

Note:

  • The CORS specification requires browsers to send a preflight request to the server to ask whether they’re allowed to access the restricted resource in another domain. Browsers are supposed to follow the server’s response to determine whether to send the actual request. The CORS filter only responds to preflight requests according to its configuration; it won’t deny any requests. Browsers are responsible for enforcing the CORS policy.
  • The targeted HTTPRoute, or the HTTPRoutes that the targeted Gateway routes to, must allow the OPTIONS method for the CORS filter to work. Otherwise, the OPTIONS request won’t match any route and the CORS filter won’t be invoked.

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the SecurityPolicy:

kubectl delete securitypolicy/cors-example

Next Steps

Checkout the Developer Guide to get involved in the project.

3.6 - External Authorization

This task provides instructions for configuring external authorization.

External authorization calls an external HTTP or gRPC service to check whether an incoming HTTP request is authorized or not. If the request is deemed unauthorized, then the request will be denied with a 403 (Forbidden) response. If the request is authorized, then the request will be allowed to proceed to the backend service.

Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure external authorization. This instantiated resource can be linked to a Gateway or HTTPRoute resource.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

HTTP External Authorization Service

Installation

Install a demo HTTP service that will be used as the external authorization service:

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/ext-auth-http-service.yaml

Create a new HTTPRoute resource to route traffic on the path /myapp to the backend service.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /myapp
    backendRefs:
    - name: backend
      port: 3000   
EOF

Verify the HTTPRoute status:

kubectl get httproute/myapp -o yaml

Configuration

Create a new SecurityPolicy resource to configure the external authorization. This SecurityPolicy targets the HTTPRoute “myapp” created in the previous step. It calls the HTTP external authorization service “http-ext-auth” on port 9002 for authorization. The headersToBackend field specifies the headers that will be sent to the backend service if the request is successfully authorized.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: ext-auth-example
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: myapp
  extAuth:
    http:
      backendRefs:
        - name: http-ext-auth
          port: 9002
      headersToBackend: ["x-current-user"]
EOF

Verify the SecurityPolicy configuration:

kubectl get securitypolicy/ext-auth-example -o yaml

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

Send a request to the backend service without Authentication header:

curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/myapp"

You should see 403 Forbidden in the response, indicating that the request is not allowed without authentication.

* Connected to 172.18.255.200 (172.18.255.200) port 80 (#0)
> GET /myapp HTTP/1.1
> Host: www.example.com
> User-Agent: curl/7.68.0
> Accept: */*
...
< HTTP/1.1 403 Forbidden
< date: Mon, 11 Mar 2024 03:41:15 GMT
< x-envoy-upstream-service-time: 0
< content-length: 0
< 
* Connection #0 to host 172.18.255.200 left intact

Send a request to the backend service with Authentication header:

curl -v -H "Host: www.example.com" -H "Authorization: Bearer token1" "http://${GATEWAY_HOST}/myapp"

The request should be allowed and you should see the response from the backend service. Because the x-current-user header from the auth response has been sent to the backend service, you should see the x-current-user header in the response.

"X-Current-User": [
   "user1"
  ],

GRPC External Authorization Service

Installation

Install a demo gRPC service that will be used as the external authorization service. The demo gRPC service is enabled with TLS, and a BackendTLSPolicy is created to configure the communication between the Envoy proxy and the gRPC service.

Note: TLS is optional for HTTP or gRPC external authorization services. However, enabling TLS is recommended for enhanced security in production environments.

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/ext-auth-grpc-service.yaml

The HTTPRoute created in the previous section is still valid and can be used with the gRPC auth service, but if you have not created the HTTPRoute, you can create it now.

Create a new HTTPRoute resource to route traffic on the path /myapp to the backend service.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /myapp
    backendRefs:
    - name: backend
      port: 3000   
EOF

Verify the HTTPRoute status:

kubectl get httproute/myapp -o yaml

Configuration

Update the SecurityPolicy that was created in the previous section to use the gRPC external authorization service. It calls the gRPC external authorization service “grpc-ext-auth” on port 9002 for authorization.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: ext-auth-example
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: myapp
  extAuth:
    grpc:
      backendRefs:
        - name: grpc-ext-auth
          port: 9002
EOF

Verify the SecurityPolicy configuration:

kubectl get securitypolicy/ext-auth-example -o yaml

Because the gRPC external authorization service is enabled with TLS, a BackendTLSPolicy needs to be created to configure the communication between the Envoy proxy and the gRPC auth service.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: grpc-ext-auth-btls
spec:
  targetRefs:
  - group: ''
    kind: Service
    name: grpc-ext-auth
    sectionName: "9002"
  validation:
    caCertificateRefs:
    - name: grpc-ext-auth-ca
      group: ''
      kind: ConfigMap
    hostname: grpc-ext-auth
EOF

Verify the BackendTLSPolicy configuration:

kubectl get backendtlspolicy/grpc-ext-auth-btls -o yaml
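
You can also confirm that the CA certificate referenced by the policy exists. Assuming the grpc-ext-auth-ca ConfigMap was created by the example manifest (it is the ConfigMap referenced in the BackendTLSPolicy above), inspect it with:

kubectl get configmap grpc-ext-auth-ca -o yaml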

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

Send a request to the backend service without an Authorization header:

curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/myapp"

You should see 403 Forbidden in the response, indicating that the request is not allowed without authentication.

* Connected to 172.18.255.200 (172.18.255.200) port 80 (#0)
> GET /myapp HTTP/1.1
> Host: www.example.com
> User-Agent: curl/7.68.0
> Accept: */*
...
< HTTP/1.1 403 Forbidden
< date: Mon, 11 Mar 2024 03:41:15 GMT
< x-envoy-upstream-service-time: 0
< content-length: 0
< 
* Connection #0 to host 172.18.255.200 left intact

Send a request to the backend service with an Authorization header:

curl -v -H "Host: www.example.com" -H "Authorization: Bearer token1" "http://${GATEWAY_HOST}/myapp"

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the demo auth services, HTTPRoute, SecurityPolicy and BackendTLSPolicy:

kubectl delete -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/ext-auth-http-service.yaml
kubectl delete -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/ext-auth-grpc-service.yaml
kubectl delete httproute/myapp
kubectl delete securitypolicy/ext-auth-example
kubectl delete backendtlspolicy/grpc-ext-auth-btls

Next Steps

Check out the Developer Guide to get involved in the project.

3.7 - IP Allowlist/Denylist

This task provides instructions for configuring IP allowlist/denylist on Envoy Gateway. IP allowlist/denylist checks if an incoming request is from an allowed IP address before routing the request to a backend service.

Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure IP allowlist/denylist. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Configuration

Create a SecurityPolicy

The below SecurityPolicy restricts access to the backend service by allowing requests only from IP addresses in the 10.0.1.0/24 range.

In this example, the default action is set to Deny, which means that only requests matching an Allow rule (here, those from the specified CIDR range) are allowed, and all other requests are denied. You can also set the default action to Allow to permit all requests except those matching a Deny rule (a sketch of this inverse configuration is shown after the verification step below).

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: authorization-client-ip
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  authorization:
    defaultAction: Deny
    rules:
    - action: Allow
      principal:
        clientCIDRs:
        - 10.0.1.0/24
EOF

Verify the SecurityPolicy configuration:

kubectl get securitypolicy/authorization-client-ip -o yaml
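
As mentioned above, the inverse, denylist, behavior uses an Allow default action with Deny rules. The following is a sketch only and is not applied as part of this task:

---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: authorization-client-ip
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  authorization:
    # Allow everything by default; deny only the listed CIDR range.
    defaultAction: Allow
    rules:
    - action: Deny
      principal:
        clientCIDRs:
        - 10.0.1.0/24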

Original Source IP

It’s important to note that the IP address used for allowlist/denylist is the original source IP address of the request. You can use a ClientTrafficPolicy to configure how Envoy Gateway should determine the original source IP address.

For example, the below ClientTrafficPolicy configures Envoy Gateway to use the X-Forwarded-For header to determine the original source IP address. The numTrustedHops field specifies the number of trusted ingress proxy hops, counted from the right side of the X-Forwarded-For header. In this example, numTrustedHops is set to 1, which means the rightmost IP address in the X-Forwarded-For header is used as the original source IP address.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-client-ip-detection
spec:
  clientIPDetection:
    xForwardedFor:
      numTrustedHops: 1
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
EOF

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

Send a request to the backend service without the X-Forwarded-For header:

curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/"

You should see 403 Forbidden in the response, indicating that the request is not allowed.

* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET / HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.8.0-DEV
> Accept: */*
> 
* Request completely sent off
< HTTP/1.1 403 Forbidden
< content-length: 19
< content-type: text/plain
< date: Mon, 08 Jul 2024 04:23:31 GMT
< 
* Connection #0 to host 172.18.255.200 left intact
RBAC: access denied

Send a request to the backend service with the X-Forwarded-For header:

curl -v -H "Host: www.example.com" -H "X-Forwarded-For: 10.0.1.1" "http://${GATEWAY_HOST}/"

The request should be allowed and you should see the response from the backend service.
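
To see the rightmost-entry behavior described above, you can also send requests with multiple entries in the X-Forwarded-For header. Assuming the ClientTrafficPolicy above is in place, the rightmost address is used as the client address, so the following request should be allowed:

curl -v -H "Host: www.example.com" -H "X-Forwarded-For: 203.0.113.7, 10.0.1.1" "http://${GATEWAY_HOST}/"

Reversing the order places an address outside 10.0.1.0/24 in the rightmost position, so the following request should be denied:

curl -v -H "Host: www.example.com" -H "X-Forwarded-For: 10.0.1.1, 203.0.113.7" "http://${GATEWAY_HOST}/"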

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the SecurityPolicy and the ClientTrafficPolicy:

kubectl delete securitypolicy/authorization-client-ip
kubectl delete clienttrafficpolicy/enable-client-ip-detection

Next Steps

Check out the Developer Guide to get involved in the project.

3.8 - JWT Authentication

This task provides instructions for configuring JSON Web Token (JWT) authentication. JWT authentication checks if an incoming request has a valid JWT before routing the request to a backend service. Currently, Envoy Gateway only supports validating a JWT from an HTTP header, e.g. Authorization: Bearer <token>.

Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure JWT authentication. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

For GRPC - follow the steps from the GRPC Routing example.

Configuration

Allow requests with a valid JWT by creating a SecurityPolicy and attaching it to the example HTTPRoute or GRPCRoute.

HTTPRoute

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/jwt/jwt.yaml

Two HTTPRoutes have been created, one for /foo and another for /bar. A SecurityPolicy has been created that targets the HTTPRoute foo, requiring authentication for requests to /foo. The HTTPRoute bar is not targeted by the SecurityPolicy and allows unauthenticated requests to /bar.

Verify the HTTPRoute configuration and status:

kubectl get httproute/foo -o yaml
kubectl get httproute/bar -o yaml

The SecurityPolicy is configured for JWT authentication and uses a single JSON Web Key Set (JWKS) provider for authenticating the JWT.

Verify the SecurityPolicy configuration:

kubectl get securitypolicy/jwt-example -o yaml

GRPCRoute

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/jwt/grpc-jwt.yaml

A SecurityPolicy has been created that targets the GRPCRoute yages, requiring authentication for all requests to the yages service.

Verify the GRPCRoute configuration and status:

kubectl get grpcroute/yages -o yaml

The SecurityPolicy is configured for JWT authentication and uses a single JSON Web Key Set (JWKS) provider for authenticating the JWT.

Verify the SecurityPolicy configuration:

kubectl get securitypolicy/jwt-example -o yaml

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

HTTPRoute

Verify that requests to /foo are denied without a JWT:

curl -sS -o /dev/null -H "Host: www.example.com" -w "%{http_code}\n" http://$GATEWAY_HOST/foo

A 401 HTTP response code should be returned.

Get the JWT used for testing request authentication:

TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode

Note: The above command decodes and returns the token’s payload. You can replace f2 with f1 to view the token’s header.

Verify that a request to /foo with a valid JWT is allowed:

curl -sS -o /dev/null -H "Host: www.example.com" -H "Authorization: Bearer $TOKEN" -w "%{http_code}\n" http://$GATEWAY_HOST/foo

A 200 HTTP response code should be returned.

Verify that requests to /bar are allowed without a JWT:

curl -sS -o /dev/null -H "Host: www.example.com" -w "%{http_code}\n" http://$GATEWAY_HOST/bar

A 200 HTTP response code should be returned.

GRPCRoute

Verify that requests to the yages service are denied without a JWT:

grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping

You should see the following response:

Error invoking method "yages.Echo/Ping": rpc error: code = Unauthenticated desc = failed to query for service descriptor "yages.Echo": Jwt is missing

Get the JWT used for testing request authentication:

TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode

Note: The above command decodes and returns the token’s payload. You can replace f2 with f1 to view the token’s header.

Verify that a request to yages service with a valid JWT is allowed:

grpcurl -plaintext -H "authorization: Bearer $TOKEN" -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping

You should see the following response:

{
  "text": "pong"
}

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the SecurityPolicy:

kubectl delete securitypolicy/jwt-example

Next Steps

Check out the Developer Guide to get involved in the project.

3.9 - JWT Claim-Based Authorization

This task provides instructions for configuring JWT claim-based authorization. JWT claim-based authorization checks if an incoming request has the required JWT claims before routing the request to a backend service.

Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure JWT claim-based authorization.

This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Configuration

Create a SecurityPolicy

Please note that JWT claim-based authorization requires a JWT token to be present in the request. JWT authentication must be configured in the same SecurityPolicy to validate the token and extract the claims.

The below SecurityPolicy configuration allows requests with a valid JWT token that has the following claims:

  • user.name claim with the value John Doe
  • user.roles claim with the value admin
  • scope claim with the values read, add, and modify

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: authorization-jwt-claim
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  jwt:
    providers:
    - name: example
      issuer: https://foo.bar.com
      remoteJWKS:
        uri: https://raw.githubusercontent.com/envoyproxy/gateway/refs/heads/main/examples/kubernetes/jwt/jwks.json
  authorization:
    defaultAction: Deny
    rules:
    - name: "allow"
      action: Allow
      principal:
        jwt:
          provider: example
          scopes: ["read", "add", "modify"]
          claims:
          - name: user.name
            values: ["John Doe"]
          - name: user.roles
            valueType: StringArray
            values: ["admin"]
EOF

Verify the SecurityPolicy configuration:

kubectl get securitypolicy/authorization-jwt-claim -o yaml

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

Define a JWT token with the required claims.

export VALID_TOKEN="eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6ImI1MjBiM2MyYzRiZDc1YTEwZTljZWJjOTU3NjkzM2RjIn0.eyJpc3MiOiJodHRwczovL2Zvby5iYXIuY29tIiwic3ViIjoiMTIzNDU2Nzg5MCIsInVzZXIiOnsibmFtZSI6IkpvaG4gRG9lIiwiZW1haWwiOiJqb2huLmRvZUBleGFtcGxlLmNvbSIsInJvbGVzIjpbImFkbWluIiwiZWRpdG9yIl19LCJwcmVtaXVtX3VzZXIiOnRydWUsImlhdCI6MTUxNjIzOTAyMiwic2NvcGUiOiJyZWFkIGFkZCBkZWxldGUgbW9kaWZ5In0.P36iAlmiRCC79OiB3vstF5Q_9OqUYAMGF3a3H492GlojbV6DcuOz8YIEYGsRSWc-BNJaBKlyvUKsKsGVPtYbbF8ajwZTs64wyO-zhd2R8riPkg_HsW7iwGswV12f5iVRpfQ4AG2owmdOToIaoch0aym89He1ZzEjcShr9olgqlAbbmhnk-namd1rP-xpzPnWhhIVI3mCz5hYYgDTMcM7qbokM5FzFttTRXAn5_Luor23U1062Ct_K53QArwxBvwJ-QYiqcBycHf-hh6sMx_941cUswrZucCpa-EwA3piATf9PKAyeeWHfHV9X-y8ipGOFg3mYMMVBuUZ1lBkJCik9f9kboRY6QzpOISARQj9PKMXfxZdIPNuGmA7msSNAXQgqkvbx04jMwb9U7eCEdGZztH4C8LhlRjgj0ZdD7eNbRjeH2F6zrWyMUpGWaWyq6rMuP98W2DWM5ZflK6qvT1c7FuFsWPvWLkgxQwTWQKrHdKwdbsu32Sj8VtUBJ0-ddEb"

Decode the JWT token to verify that it has the required claims.

jq -R 'split(".") | .[0],.[1] | @base64d | fromjson' <<< $(echo ${VALID_TOKEN})

The decoded JWT token should look like the following:

{
  "typ": "JWT",
  "alg": "RS256",
  "kid": "b520b3c2c4bd75a10e9cebc9576933dc"
}
{
  "iss": "https://foo.bar.com",
  "sub": "1234567890",
  "user": {
    "name": "John Doe",
    "email": "john.doe@example.com",
    "roles": [
      "admin",
      "editor"
    ]
  },
  "premium_user": true,
  "iat": 1516239022,
  "scope": "read add delete modify"
}

Send a request to the backend service with the valid JWT token:

curl  -H "Host: www.example.com" -H "Authorization: Bearer ${VALID_TOKEN}" "http://${GATEWAY_HOST}/"

The request should be allowed and you should see the response from the backend service.

Define a JWT token without the required claims.

export INVALID_TOKEN="eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6ImI1MjBiM2MyYzRiZDc1YTEwZTljZWJjOTU3NjkzM2RjIn0.eyJpc3MiOiJodHRwczovL2Zvby5iYXIuY29tIiwic3ViIjoiMTIzNDU2Nzg5MCIsInVzZXIiOnsibmFtZSI6IkFsaWNlIFNtaXRoIiwiZW1haWwiOiJhbGljZS5zbWl0aEBleGFtcGxlLmNvbSIsInJvbGVzIjpbImRldmVsb3BlciJdfSwicHJlbWl1bV91c2VyIjpmYWxzZSwiaWF0IjoxNTE2MjM5MDIyLCJzY29wZSI6InJlYWQgYWRkIGRlbGV0ZSJ9.Da547nNXzuQXm5E7LuLAiyFswXsW4RDhuitD_rpadtR7PTwzzOsJoqrVWJ_u1jJDaOTWIpLF4gwxDoY-Aoz_couzXzlAbECLs45ZFoc_UdffpfIbGKqTZx8VtwKuDLFsAeDDDqqx1flxFhvXHftJJdZYr1FgFz9u-absMmRU90DLmEZX3Hnyc8k8eBgeiu6vsWUD0-aNy8cWkFRbwRggkGmucFyUTG8Z1MY3iyH5E66W-ISoX8G9bzE9PTxVAAPDTvefD5iLJPSDJ8qV69OuMCJ8Dczq0L9Dd_w0sF-D1s9MTvexmGg4zBWluJ3r-pU9NHEdhqBypehp_yH8xF5Rt9AE7stZ4oPFZNyfrtkE-4IOnSEkMmzcC65g_rscn0ycerv4N5ZNpkr0x2IYYM4iGuo-ULv5Htnli3rffST45kx1XA8cdsrT1D0K3aPxdIxDIk8sTJf5-WVqRyo-bwxXXltwQLB9jCM_7QbTWQBYAJwUpi-0RW4jCl44-42gZnXf"

Decode the JWT token to verify that it does not have the required claims.

jq -R 'split(".") | .[0],.[1] | @base64d | fromjson' <<< $(echo ${INVALID_TOKEN})

The decoded JWT token should look like the following:

{
  "typ": "JWT",
  "alg": "RS256",
  "kid": "b520b3c2c4bd75a10e9cebc9576933dc"
}
{
  "iss": "https://foo.bar.com",
  "sub": "1234567890",
  "user": {
    "name": "Alice Smith",
    "email": "alice.smith@example.com",
    "roles": [
      "developer"
    ]
  },
  "premium_user": false,
  "iat": 1516239022,
  "scope": "read add delete"
}

Send a request to the backend service with the invalid JWT token:

curl  -v -H "Host: www.example.com" -H "Authorization: Bearer ${INVALID_TOKEN}" "http://${GATEWAY_HOST}/"

The request should be denied and you should see a 403 Forbidden response.

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the SecurityPolicy:

kubectl delete securitypolicy/authorization-jwt-claim

Next Steps

Check out the Developer Guide to get involved in the project.

3.10 - Mutual TLS: External Clients to the Gateway

This task demonstrates how mutual TLS can be achieved between external clients and the Gateway. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.

Prerequisites

  • OpenSSL to generate TLS assets.

Installation

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

TLS Certificates

Generate the certificates and keys used by the Gateway to terminate client TLS connections.

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt

Create a certificate and a private key for www.example.com:

openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt

Store the cert/key in a Secret:

kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt

Store the CA Cert in another Secret:

kubectl create secret generic example-ca-cert --from-file=ca.crt=example.com.crt

Create a certificate and a private key for the client client.example.com:

openssl req -out client.example.com.csr -newkey rsa:2048 -nodes -keyout client.example.com.key -subj "/CN=client.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in client.example.com.csr -out client.example.com.crt

Update the Gateway from the Quickstart to include an HTTPS listener that listens on port 443 and references the example-cert Secret:

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/-
    value:
      name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            group: ""
            name: example-cert
  '

Verify the Gateway status:

kubectl get gateway/eg -o yaml

Create a ClientTrafficPolicy to enforce client validation using the CA Certificate as a trusted anchor.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-mtls
  namespace: default
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  tls:
    clientValidation:
      caCertificateRefs:
      - kind: "Secret"
        group: ""
        name: "example-ca-cert"
EOF

Testing

Get the External IP of the Gateway:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Query the example app through the Gateway:

curl -v -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cert client.example.com.crt --key client.example.com.key \
--cacert example.com.crt https://www.example.com/get

Omit the client key and certificate from the above command and verify that the connection fails:

curl -v -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cacert example.com.crt https://www.example.com/get

Get the name of the Envoy service created by the example Gateway:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

Port forward to the Envoy service:

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &

Query the example app through Envoy proxy:

curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cert client.example.com.crt --key client.example.com.key \
--cacert example.com.crt https://www.example.com:8443/get

3.11 - OIDC Authentication

This task provides instructions for configuring OpenID Connect (OIDC) authentication. OpenID Connect (OIDC) is an authentication standard built on top of OAuth 2.0. It enables EG to rely on authentication that is performed by an OpenID Connect Provider (OP) to verify the identity of a user.

Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure OIDC authentication. This instantiated resource can be linked to a Gateway or an HTTPRoute resource.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

EG OIDC authentication requires the redirect URL to be HTTPS. Follow the Secure Gateways guide to generate the TLS certificates and update the Gateway configuration to add an HTTPS listener.

Verify the Gateway status:

kubectl get gateway/eg -o yaml

Let’s create an HTTPRoute that represents an application protected by OIDC.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
spec:
  parentRefs:
  - name: eg
  hostnames: ["www.example.com"]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /myapp
    backendRefs:
    - name: backend
      port: 3000
EOF

Verify the HTTPRoute status:

kubectl get httproute/myapp -o yaml

OIDC Authentication for an HTTPRoute

OIDC can be configured at the Gateway level to authenticate all the HTTPRoutes that are associated with the Gateway with the same OIDC configuration, or at the HTTPRoute level to authenticate each HTTPRoute with different OIDC configurations.

This section demonstrates how to configure OIDC authentication for a specific HTTPRoute.

Register an OIDC application

This task uses Google as the OIDC provider to demonstrate the configuration of OIDC. However, EG works with any OIDC provider, including Auth0, Azure AD, Keycloak, Okta, OneLogin, Salesforce, UAA, etc.

Follow the steps in the Google OIDC documentation to register an OIDC application. Please make sure the redirect URL is set to the one you configured in the SecurityPolicy that you will create in the step below. In this example, the redirect URL is https://www.example.com:8443/myapp/oauth2/callback.

After registering the application, you should have the following information:

  • Client ID: The client ID of the OIDC application.
  • Client Secret: The client secret of the OIDC application.

Create a Kubernetes secret

Next, create a Kubernetes secret containing the Client Secret created in the previous step. The secret is an Opaque secret, and the Client Secret must be stored in the key “client-secret”.

Note: replace ${CLIENT_SECRET} with the actual Client Secret that you got from the previous step.

kubectl create secret generic my-app-client-secret --from-literal=client-secret=${CLIENT_SECRET}
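
You can confirm that the secret was created and that its data contains the client-secret key:

kubectl get secret my-app-client-secret -o yaml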

Create a SecurityPolicy

Note that the redirectURL and logoutPath must match the target HTTPRoute. In this example, the target HTTPRoute is configured to match the host www.example.com and the path /myapp, so the redirectURL must be prefixed with https://www.example.com:8443/myapp and the logoutPath must be prefixed with /myapp. Otherwise, the redirect and logout requests will not match the target HTTPRoute, can’t be processed by the OAuth2 filter on that HTTPRoute, and OIDC authentication will fail.

Note: replace ${CLIENT_ID} in the YAML snippet below with the actual Client ID that you got from the OIDC provider.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: oidc-example
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: myapp
  oidc:
    provider:
      issuer: "https://accounts.google.com"
    clientID: "${CLIENT_ID}"
    clientSecret:
      name: "my-app-client-secret"
    redirectURL: "https://www.example.com:8443/myapp/oauth2/callback"
    logoutPath: "/myapp/logout"
EOF

Verify the SecurityPolicy configuration:

kubectl get securitypolicy/oidc-example -o yaml

Testing

Port-forward the gateway port to localhost:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443

Add www.example.com to the /etc/hosts file on your test machine so you can use this hostname to access the gateway from a browser:

...
127.0.0.1 www.example.com

Open a browser and navigate to the https://www.example.com:8443/myapp address. You should be redirected to the Google login page. After you successfully log in, you should see the response from the backend service.

Clear the cookies in the browser and try to access the https://www.example.com:8443/foo address. You should be able to see this page since the path /foo is not protected by the OIDC policy.

OIDC Authentication for a Gateway

OIDC can be configured at the Gateway level to authenticate all the HTTPRoutes that are associated with the Gateway with the same OIDC configuration, or at the HTTPRoute level to authenticate each HTTPRoute with different OIDC configurations.

This section demonstrates how to configure OIDC authentication for a Gateway.

Register an OIDC application

If you haven’t registered an OIDC application, follow the steps in the previous section to register an OIDC application.

Create a Kubernetes secret

If you haven’t created a Kubernetes secret, follow the steps in the previous section to create one.

Create an HTTPRoute with a different subdomain

Let’s create another HTTPRoute in the same Gateway, but with a different subdomain.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: foo
spec:
  parentRefs:
  - name: eg
  hostnames: ["foo.example.com"]
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: backend
      port: 3000
EOF

Verify the HTTPRoute status:

kubectl get httproute/foo -o yaml

Create a SecurityPolicy

Create or update the SecurityPolicy to target the Gateway instead of the HTTPRoute. Note that the redirectURL and logoutPath must match one of the HTTPRoutes associated with the Gateway. In this example, the target Gateway has three associated HTTPRoutes: one with the host www.example.com and the path /myapp, one with the host www.example.com and the path /, and one with the host foo.example.com and the path /. Any of these HTTPRoutes can be used to match the redirectURL and logoutPath.

By default, the access token and ID token cookies are set to the host of the request, excluding subdomains. To allow the token cookies to be shared across subdomains and prevent users from having to log in again when switching between subdomains, the cookieDomain field needs to be set to the root domain. In this example, the root domain is example.com.

Note: if a cookieDomain is added to an existing SecurityPolicy, the cookies in the browser must be cleared before sending a new request to the Gateway, otherwise the cookies with the old subdomain will take precedence and be sent to the Gateway, causing the OIDC authentication to fail.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: oidc-example
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: Gateway
      name: eg
  oidc:
    provider:
      issuer: "https://accounts.google.com"
    clientID: "${CLIENT_ID}"
    clientSecret:
      name: "my-app-client-secret"
    redirectURL: "https://www.example.com:8443/myapp/oauth2/callback"
    logoutPath: "/myapp/logout"
    cookieDomain: "example.com"
EOF

Verify the SecurityPolicy configuration:

kubectl get securitypolicy/oidc-example -o yaml

Update the Listener TLS certificate to support multiple subdomains

Create a multi-domain wildcard certificate for *.example.com.

openssl req -out wildcard.csr -newkey rsa:2048 -nodes -keyout wildcard.key -subj "/CN=*.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in wildcard.csr -out wildcard.crt

Replace the TLS certificate of the Gateway with the wildcard certificate.

kubectl delete secret example-cert
kubectl create secret tls example-cert --key=wildcard.key --cert=wildcard.crt

Testing

If you haven’t done so, follow the steps in the previous section to port-forward the gateway port to localhost and add www.example.com to the /etc/hosts file on your test machine.

Also add foo.example.com to the /etc/hosts file on your test machine:

...
127.0.0.1 foo.example.com

Open a browser and navigate to the https://www.example.com:8443/myapp address. You should be redirected to the Google login page. After you successfully log in, you should see the response from the backend service.

You can also try to access the https://foo.example.com:8443 and https://www.example.com:8443/bar addresses. You should see the response from the backend service since these HTTPRoutes are also protected by the same OIDC config, and the cookies are shared across subdomains.

Connect to an OIDC Provider with Self-Signed Certificate

In some scenarios, the OIDC provider may use a self-signed certificate. To connect to an OIDC provider with a self-signed certificate, you need to configure it using the Backend resource within the SecurityPolicy. Additionally, use the BackendTLSPolicy to specify the CA certificate required to authenticate the OIDC provider.

The following example demonstrates how to configure the OIDC provider with a self-signed certificate.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: oidc-example
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: myapp
  oidc:
    provider:
      backendRefs:
      - group: gateway.envoyproxy.io
        kind: Backend
        name: backend-keycloak
        port: 443
      backendSettings:
        retry:
          numRetries: 3
          perRetry:
            backOff:
              baseInterval: 1s
              maxInterval: 5s
          retryOn:
            triggers: ["5xx", "gateway-error", "reset"]
      issuer: "https://my.keycloak.com/realms/master"
      authorizationEndpoint: "https://my.keycloak.com/realms/master/protocol/openid-connect/auth"
      tokenEndpoint: "https://my.keycloak.com/realms/master/protocol/openid-connect/token"
    clientID: "${CLIENT_ID}"
    clientSecret:
      name: "my-app-client-secret"
    redirectURL: "http://www.example.com/myapp/oauth2/callback"
    logoutPath: "/myapp/logout"
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: backend-keycloak
spec:
  endpoints:
  - fqdn:
      hostname: 'my.keycloak.com'
      port: 443
---
apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: policy-btls
spec:
  targetRefs:
  - group: gateway.envoyproxy.io
    kind: Backend
    name: backend-keycloak
    sectionName: "443"
  validation:
    caCertificateRefs:
    - name: backend-tls-certificate
      group: ""
      kind: ConfigMap
    hostname: my.keycloak.com
EOF

For more information about Backend and BackendTLSPolicy, refer to the Backend Routing and Backend TLS: Gateway to Backend tasks.

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the SecurityPolicy, the secret and the HTTPRoute:

kubectl delete securitypolicy/oidc-example
kubectl delete secret/my-app-client-secret
kubectl delete httproute/myapp
kubectl delete httproute/foo

Next Steps

Check out the Developer Guide to get involved in the project.

3.12 - Secure Gateways

This task will help you get started using secure Gateways. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.

Prerequisites

  • OpenSSL to generate TLS assets.

Installation

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

TLS Certificates

Generate the certificates and keys used by the Gateway to terminate client TLS connections.

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt

Create a certificate and a private key for www.example.com:

openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt

Store the cert/key in a Secret:

kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt

Update the Gateway from the Quickstart to include an HTTPS listener that listens on port 443 and references the example-cert Secret:

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/-
    value:
      name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
        - kind: Secret
          group: ""
          name: example-cert
  '

Verify the Gateway status:

kubectl get gateway/eg -o yaml

Testing

Get the External IP of the Gateway:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Query the example app through the Gateway:

curl -v -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cacert example.com.crt https://www.example.com/get

Get the name of the Envoy service created by the example Gateway:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

Port forward to the Envoy service:

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &

Query the example app through Envoy proxy:

curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get

Multiple HTTPS Listeners

Create a TLS cert/key for the additional HTTPS listener:

openssl req -out foo.example.com.csr -newkey rsa:2048 -nodes -keyout foo.example.com.key -subj "/CN=foo.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in foo.example.com.csr -out foo.example.com.crt

Store the cert/key in a Secret:

kubectl create secret tls foo-cert --key=foo.example.com.key --cert=foo.example.com.crt

Create another HTTPS listener on the example Gateway:

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/-
    value:
      name: https-foo
      protocol: HTTPS
      port: 443
      hostname: foo.example.com
      tls:
        mode: Terminate
        certificateRefs:
        - kind: Secret
          group: ""
          name: foo-cert
  '

Update the HTTPRoute to route traffic for hostname foo.example.com to the example backend service:

kubectl patch httproute backend --type=json --patch '
  - op: add
    path: /spec/hostnames/-
    value: foo.example.com
  '

Verify the Gateway status:

kubectl get gateway/eg -o yaml

Follow the steps in the Testing section to test connectivity to the backend app through both Gateway listeners. Replace www.example.com with foo.example.com to test the new HTTPS listener.
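
For example, the gateway-level query from the Testing section becomes:

curl -v -HHost:foo.example.com --resolve "foo.example.com:443:${GATEWAY_HOST}" \
--cacert example.com.crt https://foo.example.com/get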

Cross Namespace Certificate References

A Gateway can be configured to reference a certificate in a different namespace. This is allowed by a ReferenceGrant created in the target namespace. Without the ReferenceGrant, a cross-namespace reference is invalid.

Before proceeding, ensure you can query the HTTPS backend service from the Testing section.

To demonstrate cross namespace certificate references, create a ReferenceGrant that allows Gateways from the “default” namespace to reference Secrets in the “envoy-gateway-system” namespace:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: example
  namespace: envoy-gateway-system
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: Gateway
    namespace: default
  to:
  - group: ""
    kind: Secret
EOF

Delete the previously created Secret:

kubectl delete secret/example-cert

The Gateway HTTPS listener status should now surface the Ready: False condition, and the example HTTPS backend should no longer be reachable through the Gateway.

kubectl get gateway/eg -o yaml

Recreate the example Secret in the envoy-gateway-system namespace:

kubectl create secret tls example-cert -n envoy-gateway-system --key=www.example.com.key --cert=www.example.com.crt

Update the Gateway HTTPS listener with namespace: envoy-gateway-system, for example:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 80
    - name: https
      protocol: HTTPS
      port: 443
      tls:
        mode: Terminate
        certificateRefs:
          - kind: Secret
            group: ""
            name: example-cert
            namespace: envoy-gateway-system
EOF

The Gateway HTTPS listener status should now surface the Ready: True condition and you should once again be able to query the HTTPS backend through the Gateway.

Lastly, test connectivity using the above Testing section.

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the Secrets:

kubectl delete secret/example-cert
kubectl delete secret/foo-cert

RSA + ECDSA Dual stack certificates

This section walks through generating RSA- and ECDSA-derived certificates and keys for the server, which can then be configured in the Gateway listener to terminate TLS traffic.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Follow the steps in the TLS Certificates section to generate self-signed RSA derived Server certificate and private key, and configure those in the Gateway listener configuration to terminate HTTPS traffic.

Pre-checks

While testing in a cluster without external LoadBalancer support, we can query the example app through the Envoy proxy while enforcing an RSA cipher, as shown below:

curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get  -Isv --ciphers ECDHE-RSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2

Since the Secret configured at this point is an RSA-based Secret, enforcing the usage of an ECDSA cipher should cause the call to fail, as follows:

$ curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get  -Isv --ciphers ECDHE-ECDSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2

* Added www.example.com:8443:127.0.0.1 to DNS cache
* Hostname www.example.com was found in DNS cache
*   Trying 127.0.0.1:8443...
* Connected to www.example.com (127.0.0.1) port 8443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
* Cipher selection: ECDHE-ECDSA-CHACHA20-POLY1305
*  CAfile: example.com.crt
*  CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* error:1404B410:SSL routines:ST_CONNECT:sslv3 alert handshake failure
* Closing connection 0

Next, we will configure the existing Gateway listener to accept both kinds of ciphers.

TLS Certificates

Reuse the CA certificate and key pair generated in the Secure Gateways task and use this CA to sign both RSA and ECDSA Server certificates. Note the CA certificate and key names are example.com.crt and example.com.key respectively.

Create an ECDSA certificate and a private key for www.example.com:

openssl ecparam -noout -genkey -name prime256v1 -out www.example.com.ecdsa.key
openssl req -new -SHA384 -key www.example.com.ecdsa.key -nodes -out www.example.com.ecdsa.csr -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -SHA384  -days 365 -in www.example.com.ecdsa.csr -CA example.com.crt -CAkey example.com.key -CAcreateserial -out www.example.com.ecdsa.crt

Store the cert/key in a Secret:

kubectl create secret tls example-cert-ecdsa --key=www.example.com.ecdsa.key --cert=www.example.com.ecdsa.crt

Patch the Gateway with this additional ECDSA Secret:

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/1/tls/certificateRefs/-
    value:
      name: example-cert-ecdsa
  '

Verify the Gateway status:

kubectl get gateway/eg -o yaml

Testing

Again, while testing in a cluster without external LoadBalancer support, we can query the example app through the Envoy proxy while enforcing an RSA cipher, which should work as it did before:

curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get  -Isv --ciphers ECDHE-RSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
...
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
...

Additionally, querying the example app while enforcing an ECDSA cipher should also work now:

curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get  -Isv --ciphers ECDHE-ECDSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
...
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-CHACHA20-POLY1305
...

SNI based Certificate selection

This section walks through generating multiple certificates corresponding to different FQDNs. The same Gateway listener can then be configured to terminate TLS traffic for multiple FQDNs based on SNI matching.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Follow the steps in the TLS Certificates section to generate self-signed RSA derived Server certificate and private key, and configure those in the Gateway listener configuration to terminate HTTPS traffic.

Additional Configurations

Following the TLS Certificates section, we first generate an additional Secret for another host, www.sample.com.

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=sample Inc./CN=sample.com' -keyout sample.com.key -out sample.com.crt

openssl req -out www.sample.com.csr -newkey rsa:2048 -nodes -keyout www.sample.com.key -subj "/CN=www.sample.com/O=sample organization"
openssl x509 -req -days 365 -CA sample.com.crt -CAkey sample.com.key -set_serial 0 -in www.sample.com.csr -out www.sample.com.crt

kubectl create secret tls sample-cert --key=www.sample.com.key --cert=www.sample.com.crt

Note that all occurrences of example.com were simply replaced with sample.com.

Next, we update the Gateway configuration to include the new certificate, which will be used to terminate TLS traffic:

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/1/tls/certificateRefs/-
    value:
      name: sample-cert
  '

Finally, we update the HTTPRoute to route traffic for hostname www.sample.com to the example backend service:

kubectl patch httproute backend --type=json --patch '
  - op: add
    path: /spec/hostnames/-
    value: www.sample.com
  '

Testing

For clusters with external LoadBalancer support, refer to the Testing steps mentioned earlier. For clusters without, use port-forwarding as shown below.

Get the name of the Envoy service created by the example Gateway:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

Port forward to the Envoy service:

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &

Query the example app through Envoy proxy:

curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -I

Similarly, query the sample app through the same Envoy proxy:

curl -v -HHost:www.sample.com --resolve "www.sample.com:8443:127.0.0.1" \
--cacert sample.com.crt https://www.sample.com:8443/get -I

Since multiple certificates are configured on the same Gateway listener, Envoy is able to present the client with the appropriate certificate based on the SNI in the client request.
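
To confirm which certificate is presented for a given SNI, you can inspect the handshake directly with openssl (a quick check, assuming the port-forward above is still running):

openssl s_client -connect 127.0.0.1:8443 -servername www.sample.com </dev/null 2>/dev/null | openssl x509 -noout -subject

The subject should show CN=www.sample.com; repeating the command with -servername www.example.com should show CN=www.example.com.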

Customize Gateway TLS Parameters

In addition to enabling TLS via the Gateway API, Envoy Gateway supports customizing TLS parameters through the ClientTrafficPolicy resource. In this example, we will set the minimum supported TLS version to TLSv1.3.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enforce-tls-13
  namespace: default
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  tls:
    minVersion: "1.3"
EOF

Testing TLS Parameters

Attempt to connect using an unsupported TLS version:

curl -v -HHost:www.sample.com --resolve "www.sample.com:8443:127.0.0.1" \
--cacert sample.com.crt --tlsv1.2 --tls-max 1.2 https://www.sample.com:8443/get -I

[...]

* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* LibreSSL/3.3.6: error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
* Closing connection
curl: (35) LibreSSL/3.3.6: error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version

The output shows that the connection fails due to an unsupported TLS protocol version used by the client. Now, connect to the Gateway without specifying a client version, and note that the connection is established with TLSv1.3.

curl -v -HHost:www.sample.com --resolve "www.sample.com:8443:127.0.0.1" \
--cacert sample.com.crt https://www.sample.com:8443/get -I

[...]

* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF

Next Steps

Check out the Developer Guide to get involved in the project.

3.13 - Threat Model

Envoy Gateway Threat Model and End User Recommendations

About

This work was performed by ControlPlane and commissioned by the Linux Foundation. ControlPlane is a global cloud native and open source cybersecurity consultancy, trusted as the partner of choice in securing: multinational banks; major public clouds; international financial institutions; critical national infrastructure programs; multinational oil and gas companies, healthcare and insurance providers; and global media firms.

Threat Modelling Team

James Callaghan, Torin van den Bulk, Eduardo Olarte

Reviewers

Arko Dasgupta, Matt Turner, Zack Butcher, Marco De Benedictis

Introduction

As we embrace the proliferation of microservice-based architectures in the cloud-native landscape, simplicity in setup and configuration becomes paramount as DevOps teams face the challenge of choosing between numerous similar technologies. One such choice which every team deploying to Kubernetes faces is what to use as an ingress controller. With a plethora of options available, and the existence of vendor-specific annotations leading to small inconsistencies between implementations, the Gateway API project was introduced by the SIG-NETWORK community, with the goal of eventually replacing the Ingress resource.

Envoy Gateway is configured by Gateway API resources, and serves as an intuitive and feature-rich wrapper over the widely acclaimed Envoy Proxy. With a convenient setup based on Kubernetes (K8s) manifests, Envoy Gateway streamlines the management of Envoy Proxy instances in an edge-proxy setting, reducing the operational overhead of managing low-level Envoy configurations. Envoy Gateway benefits cloud-native DevOps teams through its role-oriented configuration, providing granular control based on Role-Based Access Control (RBAC) principles. These features form the basis of our exploration into Envoy Gateway and the rich feature set it brings to the table.

In this threat model, we aim to provide an analysis of Envoy Gateway’s design components and their capabilities (at version 1.0) through a threat-driven approach. It should be noted that this does not constitute a security audit of the Envoy Gateway project, but instead focuses on different possible deployment topologies for Envoy Gateway with the goal of deriving recommendations and best practice guidance for end users.

The Envoy Gateway project recommends a multi-tenancy model whereby each tenant deploys their own Envoy Gateway controller in a namespace which they own. We will also explore the implications and risks associated with multiple tenants using a shared controller.

Scope

The primary focus of this threat model is to identify and assess security risks associated with deploying and operating Envoy Gateway within a multi-tenant Kubernetes (K8s) cluster. This model aims to provide a comprehensive understanding of the system, its transmission points, and potential vulnerabilities to enumerated threats.

In Scope

Envoy Gateway: As the primary focus of this threat model, all aspects of Envoy Gateway, including its configuration, deployment, and operation will be analysed. This includes how the gateway manages TLS certificates, authentication, service-to-service traffic routing, and more.

Kubernetes Cluster: Configuration and operation of the underlying Kubernetes cluster, including how it manages network policies, access control, and resource isolation for different namespaces/tenants in relation to Envoy will be considered.

Tenant Workloads: Tenant workloads (and the pods they run on) will be considered, focusing on how they interact with the Envoy Gateway and potential vulnerabilities that could be exploited.

Out of Scope

This threat model will not consider security risks associated with the underlying infrastructure (e.g., EC2 compute instances and S3 buckets) or non-Envoy related components within the Kubernetes Cluster. It will focus solely on the Envoy Gateway and its interaction with the Kubernetes cluster and tenant workloads.

Implementation of Envoy Gateway as an egress traffic controller is out of scope for this threat model and will not be considered in the report’s findings.

Introducing Envoy Gateway

The following resources provide useful background for this threat model:

  • Envoy Proxy Threat Model

  • Configuring Envoy as an Edge Proxy

  • Envoy Gateway Deployment Mode

  • Kubernetes Gateway API Security Model

Architecture Overview

Summary

To provide an in-depth look into both the system design and end-user deployment of Envoy Gateway, we will be focusing on the Deployment Architecture Diagram below.

The Deployment Architecture Diagram provides a high-level model of an end-user deployment of Envoy Gateway. For simplicity, we will look at different deployment topologies on a single multi-tenant Kubernetes cluster. Envoy Gateway operates as an edge proxy within this environment, handling the traffic flow between external interfaces and services within the cluster. The example will use two Envoy Gateway controllers - one dedicated controller for a single tenant, and one shared controller for two other tenants. Each Envoy Gateway controller will accept a single GatewayClass resource.

Deployment Architecture Diagram

As Envoy Gateway implements the Kubernetes GatewayAPI, this threat model will focus on the key objects in the Gateway API resource model:

  1. GatewayClass: defines a set of gateways with a common configuration and behaviour. It is a cluster-scoped resource.

  2. Gateway: requests a point where traffic can be translated to Services within the cluster.

  3. Routes: describe how traffic coming via the Gateway maps to the Services.

At the time of writing, Envoy Gateway only supports a Kubernetes provider. As such, we will consider a reference architecture where multiple teams are working on the same Kubernetes cluster within different namespaces (Tenant A, B, & C). We will assume that some teams have similar security and performance needs, and a decision has been made to use a shared Gateway. However, we will also consider the case that some teams require dedicated Gateways, perhaps for compliance reasons or requirements driven by an internal threat model.

We will consider the following organisational roles, as per the Gateway API security model:

  1. Infrastructure provider: The infrastructure provider (infra) is responsible for the overall environment that the cluster(s) are operating in. Examples include: the cloud provider (AWS, Azure, GCP, …) or the PaaS provider in a company.

  2. Cluster operator: The cluster operator (ops) is responsible for administration of entire clusters. They manage policies, network access, application permissions.

  3. Application developer: The application developer (dev) is responsible for defining their application configuration (e.g. timeouts, request matching/filter) and Service composition (e.g. path routing to backends).

  4. Application admin: The application admin has administrative access to some namespaces within a cluster, but not the cluster as a whole.

Our threat model will be based on the high-level setup shown below, where Envoy is used in an edge-proxy scenario:

Architecture

The following use cases will be considered, in line with the Envoy Gateway tasks:

  1. Routing and controlling traffic, including:
    a. HTTP
    b. TCP
    c. UDP
    d. gRPC
    e. TLS passthrough
  2. TLS termination
  3. Request Authentication
  4. Rate Limiting

Key Assumptions

This section outlines the foundational premises that shape our analysis and recommendations for the deployment and management of Envoy Gateway within an organisation. The key assumptions are as follows:

1. Kubernetes Provider: For the purposes of this analysis, we assume that a K8s provider will be used to host the cluster.

2. Multi-tenant cluster: In order to produce a broad set of recommendations, it is assumed that within the single cluster, there is:

  • A dedicated cluster operations (ops) team responsible for maintaining the core cluster infrastructure.

  • Multiple application teams who wish to define their own Gateway resources, which will route traffic to their respective applications.

3. Soft multi-tenancy model: It is assumed that co-tenants will have some level of trust between themselves, and will not act in an overtly hostile manner to each other.

4. Ingress Control: It’s assumed that Envoy Gateway is the only ingress controller in the K8s cluster as multiple controllers can lead to complex routing challenges and introduce out-of-scope security vulnerabilities.

5. Container Security: This threat model focuses on evaluating the security of the Envoy Gateway and Envoy Proxy images. All other container images running in tenant clusters, not associated with the edge proxy deployment, are assumed to be secure and obtained from trusted registries such as Docker Hub or Google Container Registry (GCR).

6. Cloud Provider Security: It is assumed that the K8s cluster is running on secure cloud infrastructure provided by a trusted Cloud Service Provider (CSP) such as AWS, GCP, or Azure Cloud.

Data

Data Dictionary

Ultimately, the data of interest in a threat model is the business data processed by the system in question. However, in the case of this threat model, we are looking at a generic deployment architecture involving Envoy Gateway in order to draw out a set of generalised threats which can be considered by teams looking to adopt an implementation of Gateway API. As such, we do not know the business impacts of a compromise of confidentiality, integrity or availability that would typically be captured in a data impact assessment. Instead, we will base our threat assessment on high-level groupings of data structures used in the configuration and operation of the general use cases considered (e.g. HTTP routing, TLS termination, request authentication etc.). We will then assign a confidentiality, integrity and availability impact based on a worst-case scenario of how each compromise could potentially affect business data processed by the generic deployment.

Each entry below gives the data name / type, explanatory notes, and the assessed worst-case Confidentiality, Integrity and Availability impact.

Static Configuration Data (Confidentiality: Medium | Integrity: High | Availability: Low)
Static configuration data is used to configure Envoy Gateway at startup. This data structure allows for a Provider to be set, which Envoy Gateway calls to establish its runtime configuration, resolve services and persist data. Unauthorised modification of static configuration data could enable the Envoy Gateway admin interface to be configured, logging parameters to be modified, global rate limiting configuration to be misconfigured, or malicious extensions registered for the Envoy Gateway Control Plane. A compromise of confidentiality could potentially give an attacker some useful reconnaissance information. A compromise of the availability of this information at startup time would result in Envoy Gateway starting with default parameters.

Dynamic Configuration Data (Confidentiality: Medium | Integrity: High | Availability: Medium)
Dynamic configuration data represents the desired state of the Data Plane, and is defined through Envoy Gateway and Gateway API Kubernetes resources. Unauthorised modification of this data could lead to vulnerabilities in an organisation's Data Plane infrastructure via misconfiguration of an EnvoyProxy custom resource. Misconfiguration of Gateway API objects such as HTTPRoutes or TLSRoutes could result in traffic being directed to incorrect backends. A compromise of confidentiality could potentially give an attacker some useful reconnaissance information. A compromise of the availability of this information could result in tenant application traffic not being routable until the configuration is recovered and reapplied.

TLS Private Keys (Confidentiality: High | Integrity: High | Availability: Medium)
TLS Private Keys, typically in PEM format, are used to initiate secure connections and encrypt communications. In the context of this threat model, private keys will be associated with the server side of an inbound TLS connection being terminated at a secure gateway configured through Envoy Gateway. Unauthorised exposure could lead to security threats such as person-in-the-middle attacks, whereby the confidentiality or integrity of business data could be compromised. A compromise of integrity may lead to similar consequences if an attacker could insert their own key material. An availability compromise could lead to tenant services being unavailable until new key material is generated and an appropriate CSR submitted.

TLS Certificates (Confidentiality: Low | Integrity: High | Availability: Medium)
X.509 certificates represent the binding of a public key (associated with the private key described above) to an identity in a TLS handshake. If an attacker could compromise the integrity of a certificate, they may be able to bind the identity of a TLS termination point to a key pair under their control, enabling person-in-the-middle attacks. An availability compromise could lead to tenant services being unavailable until new key material is generated and an appropriate CSR submitted.

JWKs (Confidentiality: Low | Integrity: High | Availability: Medium)
A JWK (JSON Web Key) contains a public key used to validate JWTs for the client authentication use case considered in this threat model. If an attacker could compromise the integrity of a JWK or JSON web key set (JWKS), they could potentially authenticate to a service maliciously. Unavailability of an endpoint exposing JWKs could lead to client requests which require authentication being denied.

JWTs (Confidentiality: High | Integrity: High | Availability: Low)
JWTs, formatted as compact, URL-safe JSON data structures, are utilised for the client authentication use case considered in this threat model. Maintaining their confidentiality and integrity is vital to prevent unauthorised access and ensure correct user identification.

OIDC Credentials (Confidentiality: High | Integrity: High | Availability: Medium)
In OIDC authentication scenarios, the application credentials are represented by a client ID and a client secret. A compromise of their confidentiality or integrity could allow malicious actors to impersonate the application, potentially being able to access resources on behalf of the application and request ID tokens on behalf of users. Unavailability of this data would produce a rejection of the requests coming from legitimate users.

Basic Authentication Password Hashes (Confidentiality: High | Integrity: High | Availability: Medium)
In basic authentication scenarios, passwords are stored as Kubernetes secrets in htpasswd format, where each entry is formed by the username and the hashed password. A compromise of these credentials' confidentiality and integrity could lead to unauthorised access to the application. Unavailability of these credentials will cause login failures from the application users.

CIA Impact Assessment

Confidentiality

  • High: Compromise of sensitive client data

  • Medium: Information leaked which could be useful for attacker reconnaissance

  • Low: Non-sensitive information leakage

Integrity

  • High: Compromise of source code repositories and gateway deployments

  • Medium: Traffic routing fails due to misconfiguration / invalid configuration

  • Low: Non-critical operation is blocked due to misconfiguration / invalid configuration

Availability

  • High: Large scale DoS

  • Medium: Tenant application is blocked for a significant period

  • Low: Tenant application is blocked for a short period

Data Flow Diagrams

The Data Flow Diagrams (DFDs) below describe the flow of data between the various processes, entities and data stores in a system, as well as the trust boundaries between different user roles and network interfaces. The DFDs are drawn at two different levels, starting at L0 (high-level system view) and increasing in granularity (to L1).

DFD L0

DFD L0

DFD L1

DFD L1

Key Threats and Recommendations

The scope of this threat model led us to categorise threats into priorities of High, Medium or Low; notably, in a production implementation, some threats' prioritisation may be upgraded or downgraded depending on the business context and data classification.

Risk vs. Threat

For every finding, the risk and threat are stated. Risk defines the potential for a negative outcome, while threat defines the event that causes that negative outcome.

Threat Categorization

Throughout this threat model, we categorised threats into different areas based on their origin and the segment of the system that they impact. Here’s an overview of each category:

Container Security (CS): These threats are general to containerised applications. Therefore, they are not associated with Envoy Gateway or the Gateway API and could occur in most containerised workloads. They can originate from misconfigurations or vulnerabilities in the orchestrator or the container.

Gateway API (GW): These are threats related to the Gateway API that could affect any of its implementations. Malicious actors could benefit from misconfigurations or excessive permissions on the Gateway API resources (e.g. xRoutes or Gateways) to compromise the confidentiality, integrity, or availability of the application.

Envoy Gateway (EG): These threats are associated with specific configurations or features from Envoy Gateway or Envoy Proxy. If not set properly, these features could be leveraged to gain unauthorised access to protected resources.

Threat Actors

In order to provide a realistic set of threats that is applicable to most organisations, we de-scoped the most advanced and hard to mitigate threat actors as described below:

In Scope Threat Actors

When considering internal threat actors, we chose to follow the security model of the Kubernetes Gateway API.

Internal Attacker
  • Cluster Operator: The cluster operator (ops) is responsible for administration of entire clusters. They manage policies, network access, application permissions.

  • Application Developer: The application developer (dev) is responsible for defining their application configuration (e.g. timeouts, request matching/filter) and Service composition (e.g. path routing to backends).

  • Application Administrator: The application admin has administrative access to some namespaces within a cluster, but not the cluster as a whole.

External Attacker
  • Vandal: Script kiddie, trespasser

  • Motivated Individual: Political activist, thief, terrorist

  • Organised Crime: Syndicates, state-affiliated groups

Out of Scope Threat Actors

External Actors
  • Infrastructure Provider: The infrastructure provider (infra) is responsible for the overall environment that the cluster(s) are operating in. Examples include: the cloud provider, or the PaaS provider in a company.

  • Cloud Service Insider: Employee, external contractor, temporary worker

  • Foreign Intelligence Services (FIS): Nation states

High Priority Findings

EGTM-001 Usage of self-signed certificates

ID: EGTM-001 | UID: EGTM-GW-001 | Category: Gateway API | Priority: High

Risk: Self-signed certificates (which do not comply with PKI best practices) could lead to unauthorised access to the private key associated with the certificate used for inbound TLS termination at Envoy Proxy, compromising the confidentiality and integrity of proxied traffic.

Threat: Compromise of the private key associated with the certificate used for inbound TLS terminating at Envoy Proxy.

Recommendation: The Envoy Gateway quickstart demonstrates how to set up a Secure Gateway using an example where a self-signed root certificate is created using openssl. As stated in the Envoy Gateway documentation, this is not a suitable configuration for Production usage. It is recommended that PKI best practices are followed, whereby certificates are signed by an Intermediary CA which sits underneath an organisational 'offline' Root CA.

PKI best practices should also apply to the management of client certificates when using mTLS. The Envoy Gateway mTLS task shows how to set up client certificates using self-signed certificates. As with gateway certificates, and as mentioned in the documentation, this configuration should not be used in production environments.

EGTM-002 Private keys are stored as Kubernetes secrets

ID: EGTM-002 | UID: EGTM-CS-001 | Category: Container Security | Priority: High

Risk: There is a risk that a threat actor could compromise the Kubernetes secret containing the Envoy private key, allowing the attacker to decrypt Envoy Proxy traffic, compromising the confidentiality of proxied traffic.

Threat: Kubernetes secret containing the Envoy private key is compromised and used to decrypt proxied traffic.

Recommendation: Certificate management best practices mandate short-lived key material where practical, meaning that a mechanism for rotation of private keys and certificates is required, along with a way for certificates to be mounted into Envoy containers. If Kubernetes secrets are used, when a certificate expires, the associated secret must be updated, and Envoy containers must be redeployed. Instead of a manual configuration, it is recommended that cert-manager is used.
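As a minimal sketch of this approach, the Certificate resource below asks cert-manager to issue and automatically renew the key material behind a listener Secret. It assumes cert-manager is installed and that a ClusterIssuer named my-ca-issuer already exists; all names and durations are illustrative:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: www-sample-com            # illustrative name
  namespace: default
spec:
  secretName: www-sample-com-tls  # Secret a Gateway listener can reference via certificateRefs
  dnsNames:
    - www.sample.com
  duration: 720h                  # 30-day lifetime keeps key material short-lived
  renewBefore: 240h               # rotate well before expiry
  issuerRef:
    name: my-ca-issuer            # assumed pre-existing ClusterIssuer
    kind: ClusterIssuer

Because cert-manager renews the Secret in place, rotated key material can be picked up without manual secret edits.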

EGTM-004 Usage of ClusterRoles with wide permissions

ID: EGTM-004 | UID: EGTM-K8-002 | Category: Container Security | Priority: High

Risk: There is a risk that a threat actor could abuse misconfigured RBAC to access the Envoy Gateway ClusterRole (envoy-gateway-role) and use it to expose all secrets across the cluster, thus compromising the confidentiality and integrity of tenant data.

Threat: Compromised Envoy Gateway or misconfigured ClusterRoleBinding (envoy-gateway-rolebinding) to Envoy Gateway ClusterRole (envoy-gateway-role), provides access to resources and secrets in different namespaces.

Recommendation: Users should be aware that Envoy Gateway uses a ClusterRole (envoy-gateway-role) when deployed via the Helm chart, to allow management of Envoy Proxies across different namespaces. This ClusterRole is powerful and includes the ability to read secrets in namespaces which may not be within the purview of Envoy Gateway.

Kubernetes best-practices involve restriction of ClusterRoleBindings, with the use of RoleBindings where possible to limit access per namespace by specifying the namespace in metadata. Namespace isolation reduces the impact of compromise from cluster-scoped roles. Ideally, fine-grained K8s roles should be created per the principle of least privilege to ensure they have the minimum access necessary for role functions.

The pull request #1656 introduced the use of Roles and RoleBindings in namespaced mode. This feature can be leveraged to reduce the amount of permissions required by the Envoy Gateway.
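As a hedged sketch of the namespaced alternative (the namespace, Role and ServiceAccount names here are illustrative and depend on how Envoy Gateway is deployed), a Role plus RoleBinding limits secret access to a single namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secrets-reader              # illustrative namespaced Role
  namespace: tenant-a
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"] # read-only, and only within tenant-a
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: envoy-gateway-secrets-reader
  namespace: tenant-a               # the binding is scoped to this namespace
subjects:
  - kind: ServiceAccount
    name: envoy-gateway             # assumed controller ServiceAccount
    namespace: envoy-gateway-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secrets-reader              # a Role, not a ClusterRole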

EGTM-007 Misconfiguration of Envoy Gateway dynamic config

ID: EGTM-007 | UID: EGTM-EG-002 | Category: Envoy Gateway | Priority: High

Risk: There is a risk that a threat actor could exploit misconfigured Kubernetes RBAC to create or modify Gateway API resources with no business need, potentially leading to the compromise of the confidentiality, integrity, and availability of resources and traffic within the cluster.

Threat: Unauthorised creation or misconfiguration of Gateway API resources by a threat actor with cluster-scoped access.

Recommendation: Configure the apiGroup and resource fields in RBAC policies to restrict access to Gateway and GatewayClass resources. Enable namespace isolation by using the namespace field, preventing unauthorised access to gateways in other namespaces.
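To illustrate, the namespaced Role below (a sketch; tenant-a and gateway-editor are hypothetical names) grants an application team write access to Gateways and HTTPRoutes only within its own namespace. GatewayClass is cluster-scoped, so leaving it out of any namespaced Role keeps it beyond tenant reach:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gateway-editor                # illustrative
  namespace: tenant-a                 # namespace isolation: no cluster-wide grant
rules:
  - apiGroups: ["gateway.networking.k8s.io"]
    resources: ["gateways", "httproutes"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]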

EGTM-009 Co-tenant misconfigures resource across namespaces

ID: EGTM-009 | UID: EGTM-GW-002 | Category: Gateway API | Priority: High

Risk: There is a risk that a co-tenant misconfigures Gateway or Route resources, compromising the confidentiality, integrity, and availability of routed traffic through Envoy Gateway.

Threat: Malicious or accidental co-tenant misconfiguration of Gateways and Routes associated with other application teams.

Recommendation: Dedicated Envoy Gateways should be provided to each tenant within their respective namespace. A one-to-one relationship should be established between GatewayClass and Gateway resources, meaning that each tenant namespace should have their own GatewayClass watched by a unique Envoy Gateway Controller as defined here in the Deployment Mode documentation.

Application Admins should have write permissions on the Gateway resource, but only in their specific namespaces, and Application Developers should only hold write permissions on Route resources. To enact this access control schema, follow the Write Permissions for Advanced 4 Tier Model described in the Kubernetes Gateway API security model. Examples of secured gateway-route topologies can be found here within the Kubernetes Gateway API docs.

Optionally, consider a GitOps model, where only the GitOps operator has the permission to deploy or modify custom resources in production.

EGTM-014 Malicious image admission

ID: EGTM-014 | UID: EGTM-CS-006 | Category: Container Security | Priority: High

Risk: There is a risk that a supply chain attack on Envoy Gateway results in an arbitrary compromise of the confidentiality, integrity or availability of tenant data.

Threat: Supply chain threat actor introduces malicious code into Envoy Gateway or Proxy.

Recommendation: The Envoy Gateway project should continue to work towards conformance with supply-chain security best practices throughout the project lifecycle (for example, as set out in the CNCF Software Supply Chain Best Practices Whitepaper). Adherence to Supply-chain Levels for Software Artefacts (SLSA) standards is crucial for maintaining the security of the system. Employ version control systems to monitor the source and build platforms and assign responsibility to a specific stakeholder.

Integrate a supply chain security tool such as Sigstore, which provides native capabilities for signing and verifying container images and software artefacts. Software Bill of Materials (SBOM), Vulnerability Exploitability eXchange (VEX), and signed artefacts should also be incorporated into the security protocol.

EGTM-020 Out of date or misconfigured Envoy Proxy image

ID: EGTM-020 | UID: EGTM-CS-009 | Category: Container Security | Priority: High

Risk: There is a risk that a threat actor exploits an Envoy Proxy vulnerability to remote code execution (RCE) due to out of date or misconfigured Envoy Proxy pod deployment, compromising the confidentiality and integrity of Envoy Proxy along with the availability of the proxy service.

Threat: Deployment of an Envoy Proxy or Gateway image containing exploitable CVEs.

Recommendation: Always use the latest version of the Envoy Proxy image. Regularly check for updates and patch the system as soon as updates become available. Implement a CI/CD pipeline that includes security checks for images and prevents deployment of insecure configurations. A suitable tool should be chosen to provide container vulnerability scanning to mitigate the risk of known vulnerabilities.

Utilise the Pod Security Admission controller to enforce Pod Security Standards and configure the pod security context to limit its capabilities per the principle of least privilege.

EGTM-022 Credentials are stored as Kubernetes Secrets

ID: EGTM-022 | UID: EGTM-CS-010 | Category: Container Security | Priority: High

Risk: There is a risk that the OIDC client secret (for OIDC authentication) and user password hashes (for basic authentication) get leaked due to misconfigured RBAC permissions.

Threat: Unauthorised access to the application due to credential leakage.

Recommendation: Ensure that only authorised users and service accounts are able to access secrets. This is especially important in namespaces where SecurityPolicy objects are configured, since those namespaces are the ones to store secrets containing the client secret (in OIDC scenarios) and user password hashes (in basic authentication scenarios).

To do so, minimise the use of ClusterRoles and Roles allowing listing and getting secrets. Perform periodic audits of RBAC permissions.

EGTM-023 Weak Authentication

ID: EGTM-023 | UID: EGTM-EG-007 | Category: Envoy Gateway | Priority: High

Risk: There is a risk of unauthorised access due to the use of basic authentication, which does not enforce any password restriction in terms of complexity and length. In addition, password hashes are stored in SHA1 format, which is a deprecated hashing function.

Threat: Unauthorised access to the application due to weak authentication mechanisms.

Recommendation: It is recommended to make use of stronger authentication mechanisms (i.e. JWT authentication and OIDC authentication) instead of basic authentication. These authentication mechanisms have many advantages, such as the use of short-lived credentials and a central management of security policies through the identity provider.

Medium Priority Findings

EGTM-008 Misconfiguration of Envoy Gateway static config

ID: EGTM-008 | UID: EGTM-EG-003 | Category: Envoy Gateway | Priority: Medium

Risk: There is a risk of a threat actor misconfiguring static config and compromising the integrity of Envoy Gateway, ultimately leading to the compromised confidentiality, integrity, or availability of tenant data and cluster resources.

Threat: Accidental or deliberate misconfiguration of static configuration leads to a misconfigured deployment of Envoy Gateway, for example logging parameters could be modified or global rate limiting configuration misconfigured.

Recommendation: Implement a GitOps model, utilising Kubernetes' Role-Based Access Control (RBAC) and adhering to the principle of least privilege to minimise human intervention on the cluster. For instance, tools like Flux and ArgoCD can be used for declarative GitOps deployments, ensuring all changes are tracked and reviewed. Additionally, configure your source control management (SCM) system to include mandatory pull request (PR) reviews, commit signing, and protected branches to ensure only authorised changes can be committed to the start-up configuration.

EGTM-010 Weak pod security contexts and policies

ID: EGTM-010 | UID: EGTM-CS-005 | Category: Container Security | Priority: Medium

Risk: There is a risk that a threat actor exploits a weak pod security context, compromising the CIA of a node and the resources / services which run on it.

Threat: Threat Actor who has compromised a pod exploits weak security context to escape to a node, potentially leading to the compromise of Envoy Proxy or Gateway running on the same node.

Recommendation: To mitigate this risk, apply Pod Security Standards at a minimum of Baseline level to all namespaces, especially those containing Envoy Gateway and Proxy Pods. Pod security standards are implemented through K8s Pod Security Admission to provide admission control modes (enforce, audit, and warn) for namespaces. Pod security standards can be enforced by namespace labels as shown here, to enforce a baseline level of pod security to specific namespaces.

Further enhance the security by implementing a sandboxing solution such as gVisor for Envoy Gateway and Proxy Pods to isolate the application from the host kernel. This can be set within the runtimeClassName of the Pod specification.
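For example, the namespace labels below enforce the Baseline profile and warn on anything that falls short of Restricted; choose the levels that match your risk appetite:

apiVersion: v1
kind: Namespace
metadata:
  name: envoy-gateway-system
  labels:
    pod-security.kubernetes.io/enforce: baseline   # reject Pods that violate Baseline
    pod-security.kubernetes.io/warn: restricted    # surface anything below Restricted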

EGTM-012 ClusterRoles and Roles with permission to deploy ReferenceGrants

ID: EGTM-012 | UID: EGTM-GW-004 | Category: Gateway API | Priority: Medium

Risk: There is a risk that a threat actor could abuse excessive RBAC privileges to create ReferenceGrant resources. These resources could then be used to create cross-namespace communication, leading to unauthorised access to the application. This could compromise the confidentiality and integrity of resources and configuration in the affected namespaces and potentially disrupt the availability of services that rely on these object references.

Threat: A ReferenceGrant is created, which validates traffic to cross namespace trust boundaries without a valid business reason, such as a route in one tenant's namespace referencing a backend in another.

Recommendation: Ensure that the ability to create ReferenceGrant resources is restricted to the minimum number of people. Pay special attention to ClusterRoles that allow that action.
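For context, the sketch below (with hypothetical tenant-a and tenant-b namespaces) shows the cross-namespace trust a ReferenceGrant encodes. Because the grant is created in the target namespace, write access to that namespace is precisely what needs to be restricted:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-routes-from-tenant-a  # illustrative
  namespace: tenant-b               # created in the namespace being referenced
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: tenant-a           # routes in tenant-a may now reference...
  to:
    - group: ""
      kind: Service                 # ...Services in tenant-b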

EGTM-018 Network Denial of Service (DoS)

ID: EGTM-018 | UID: EGTM-GW-006 | Category: Gateway API | Priority: Medium

Risk: There is a risk that malicious requests could lead to a Denial of Service (DoS) attack, thereby reducing API gateway availability due to misconfigurations in rate-limiting or load balancing controls, or a lack of route timeout enforcement.

Threat: Reduced API gateway availability due to an attacker's maliciously crafted request (e.g., QoD) potentially inducing a Denial of Service (DoS) attack.

Recommendation: To ensure high availability and mitigate potential security threats, follow the guidelines in the Envoy Gateway documentation for configuring local rate limit filters, global rate limit filters, and load balancing.

Further, adhere to best practices for configuring Envoy Proxy as an edge proxy documented here within the EnvoyProxy docs. This involves configuring TCP and HTTP proxies with specific settings, including restricting access to the admin endpoint, setting the overload manager and listener / cluster buffer limits, enabling use_remote_address, setting connection and stream timeouts, limiting maximum concurrent streams, setting initial stream window size limit, and configuring action on headers_with_underscores.

Path normalisation should be enabled to minimise path confusion vulnerabilities. These measures help protect against volumetric threats such as Denial of Service (DoS) attacks. Utilise custom resources to implement policy attachment, thereby exposing request limit configuration for route types.
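As an illustrative sketch only (verify field names against the rate limit tasks for your Envoy Gateway version; the HTTPRoute name is assumed), a BackendTrafficPolicy can attach a local rate limit to a route:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: local-rate-limit        # illustrative
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: backend             # assumed existing HTTPRoute
  rateLimit:
    type: Local
    local:
      rules:
        - limit:
            requests: 100
            unit: Second        # excess requests are rejected rather than proxied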

EGTM-019 JWT-based authentication replay attacks

ID: EGTM-019 | UID: EGTM-DP-004 | Category: Container Security | Priority: Medium

Risk: There is a risk that replay attacks using stolen or reused JSON Web Tokens (JWTs) can compromise transmission integrity, thereby undermining the confidentiality and integrity of the data plane.

Threat: Transmission integrity is compromised due to replay attacks using stolen or reused JSON Web Tokens (JWTs).

Recommendation: Comply with JWT best practices for enhanced security, paying special attention to the use of short-lived tokens, which reduce the window of opportunity for a replay attack. The exp claim can be used to set token expiration times.
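As a hedged sketch (the issuer's JWKS URI is an assumption, and the layout mirrors the ClientTrafficPolicy example earlier in this document), a SecurityPolicy can require JWT validation on a route; Envoy then rejects tokens whose exp claim has passed:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-auth                # illustrative
  namespace: default
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: backend             # assumed existing HTTPRoute
  jwt:
    providers:
      - name: example-idp       # illustrative provider name
        remoteJWKS:
          uri: https://idp.example.com/.well-known/jwks.json  # assumed IdP endpoint

Short token lifetimes themselves are configured at the identity provider; the gateway can only enforce what the exp claim asserts.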

EGTM-024 Excessive privileges via extension policies

ID: EGTM-024 | UID: EGTM-EG-008 | Category: Envoy Gateway | Priority: Medium

Risk: There is a risk of developers getting more privileges than required due to the use of SecurityPolicy, ClientTrafficPolicy, EnvoyPatchPolicy and BackendTrafficPolicy. These resources can be attached to a Gateway resource. Therefore, a developer with permission to deploy them would be able to modify a Gateway configuration by targeting the gateway in the policy manifest. This conflicts with the Advanced 4 Tier Model, where developers do not have write permissions on Gateways.

Threat: Excessive developer permissions lead to a misconfiguration and/or unauthorised access.

Recommendation: Considering the Tenant C scenario (represented in the Architecture Diagram), if a developer can create SecurityPolicy, ClientTrafficPolicy, EnvoyPatchPolicy or BackendTrafficPolicy objects in namespace C, they would be able to modify a Gateway configuration by attaching the policy to the gateway. In such scenarios, it is recommended to either:

a. Create a separate namespace, where developers have no permissions, to host tenant C's gateway. Note that, due to design decisions, the SecurityPolicy/EnvoyPatchPolicy/ClientTrafficPolicy/BackendTrafficPolicy object can only target resources deployed in the same namespace. Therefore, having a separate namespace for the gateway would prevent developers from attaching the policy to the gateway.

b. Forbid the creation of these policies for developers in namespace C.

On the other hand, in scenarios similar to tenants A and B, where a shared gateway namespace is in place, this issue is more limited. Note that in this scenario, developers don't have access to the shared gateway namespace.

In addition, it is important to mention that EnvoyPatchPolicy resources can also be attached to GatewayClass resources. This means that, in order to comply with the Advanced 4 Tier model, individuals with the Application Administrator role should not have access to this resource either.

Low Priority Findings

EGTM-003 Misconfiguration leads to insecure TLS settings

ID: EGTM-003 | UID: EGTM-EG-001 | Category: Envoy Gateway | Priority: Low

Risk: There is a risk that a threat actor could downgrade the security of proxied connections by configuring a weak set of cipher suites, compromising the confidentiality and integrity of proxied traffic.

Threat: Exploit weak cipher suite configuration to downgrade security of proxied connections.

Recommendation: Users operating in highly regulated environments may need to tightly control the TLS protocol and associated cipher suites, blocking non-conforming incoming connections to the gateway.

EnvoyProxy bootstrap config can be customised as per the customise EnvoyProxy documentation. In addition, from v.1.0.0, it is possible to configure common TLS properties for a Gateway or XRoute through the ClientTrafficPolicy object.

EGTM-005 Envoy Gateway Helm chart deployment does not set AppArmor and Seccomp profiles

ID: EGTM-005 | UID: EGTM-CP-002 | Category: Container Security | Priority: Low

Risk: A threat actor who has obtained access to the Envoy Gateway pod could exploit the lack of AppArmor and seccomp profiles in the Envoy Gateway deployment to attempt a container breakout, given the presence of an exploitable vulnerability, potentially impacting the confidentiality and integrity of node resources.

Threat: Unauthorised syscalls and malicious code running in the Envoy Gateway pod.

Recommendation: Implement AppArmor policies by setting <container_name>: <profile_ref> within container.apparmor.security.beta.kubernetes.io (note, this config is set per container). Well-defined AppArmor policies may provide greater protection from unknown threats.

Enforce seccomp profiles by setting the seccompProfile field under securityContext. Ideally, a fine-grained profile should be used to restrict access to only necessary syscalls; alternatively, the kubelet's --seccomp-default flag can be set so that Pods fall back to RuntimeDefault, the container runtime's default seccomp profile. Example seccomp profiles can be found here.

To further enhance pod security, consider implementing SELinux via seLinuxOptions for additional syscall attack surface reduction. Setting readOnlyRootFilesystem == true enforces an immutable root filesystem, preventing the addition of malicious binaries to the PATH and increasing the attack cost. Together, these configuration items improve the pod's security context.
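A minimal sketch of these settings on a single Pod follows (container and image names are illustrative; in practice the values would be supplied through the Helm chart rather than a hand-written Pod):

apiVersion: v1
kind: Pod
metadata:
  name: envoy-gateway-example
  annotations:
    # AppArmor profile, set per container via the beta annotation
    container.apparmor.security.beta.kubernetes.io/envoy-gateway: runtime/default
spec:
  containers:
    - name: envoy-gateway
      image: docker.io/envoyproxy/gateway:latest   # placeholder image reference
      securityContext:
        seccompProfile:
          type: RuntimeDefault          # or Localhost with a fine-grained profile
        readOnlyRootFilesystem: true    # immutable root filesystem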

EGTM-006 Envoy Proxy pods deployed with a shell enabled

ID: EGTM-006 | UID: EGTM-CS-004 | Category: Container Security | Priority: Low

Risk: There is a risk that a threat actor exploits a vulnerability in Envoy Proxy to expose a reverse shell, enabling them to compromise the confidentiality, integrity and availability of tenant data via a secondary attack.

Threat: If an external attacker managed to exploit a vulnerability in Envoy, the presence of a shell would be greatly helpful for the attacker in terms of potentially pivoting, escalating, or establishing some form of persistence.

Recommendation: Since v0.6.0, Envoy ships a distroless image by default, which does not include a shell. Therefore, ensure the EnvoyProxy image is up to date and patched with the latest stable version.

If using private EnvoyProxy images, use a lightweight EnvoyProxy image without a shell or debugging tool(s) which may be useful for an attacker.

An AuditPolicy (audit.k8s.io/v1beta1) can be configured to record API calls made within your cluster, allowing for identification of malicious traffic and enabling incident response. Requests are recorded based on stages which delineate between the lifecycle stage of the request made (e.g., RequestReceived, ResponseStarted, & ResponseComplete).

EGTM-011 Route Bindings on custom labels

ID: EGTM-011 | UID: EGTM-GW-003 | Category: Gateway API | Priority: Low

Risk: There is a risk that a gateway owner (or someone with the ability to set namespace labels) maliciously or accidentally binds routes across namespace boundaries, potentially compromising the confidentiality and integrity of traffic in a multitenant scenario.

Threat: If a Route Binding within a Gateway Listener is configured based on a custom label, it could allow a malicious internal actor with the ability to label namespaces to change the set of namespaces supported by the Gateway.

Recommendation: Consider the use of custom admission control to restrict what labels can be set on namespaces through tooling such as Kubewarden, Kyverno, and OPA Gatekeeper. Route binding should follow the Kubernetes Gateway API security model, as shown here, to connect gateways in different namespaces.

EGTM-013 GatewayClass namespace validation is not configured

ID: EGTM-013 | UID: EGTM-GW-005 | Category: Gateway API | Priority: Low

Risk: There is a risk that an unauthorised actor deploys an unauthorised GatewayClass due to GatewayClass namespace validation not being configured, leading to non-compliance with business and security requirements.

Threat: Unauthorised deployment of Gateway resource via GatewayClass template which crosses namespace trust boundaries.

Recommendation: Leverage GatewayClass namespace validation to limit the namespaces where GatewayClasses can be run through a tool such as OPA Gatekeeper. Reference pull request #24 within gatekeeper-library which outlines how to add GatewayClass namespace validation through a GatewayClassNamespaces API resource kind within the constraints.gatekeeper.sh/v1beta1 apiGroup.

EGTM-015 ServiceAccount token authentication

ID: EGTM-015 | UID: EGTM-CS-007 | Category: Container Security | Priority: Low

Risk: There is a risk that threat actors could exploit ServiceAccount tokens for illegitimate authentication, thereby leading to privilege escalation and the undermining of gateway API resources' integrity, confidentiality, and availability.

Threat: The threat arises from threat actors impersonating the envoy-gateway ServiceAccount through the replay of ServiceAccount tokens, thereby achieving escalated privileges and gaining unauthorised access to Kubernetes resources.

Recommendation: Limit the creation of ServiceAccounts to only when necessary, specifically refraining from using default service account tokens, especially for high-privilege service accounts. For legacy clusters running Kubernetes version 1.21 or earlier, note that ServiceAccount tokens are long-lived by default. To disable the automatic mounting of the service account token, set automountServiceAccountToken: false in the PodSpec.
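For example, automounting can be switched off in the PodSpec (names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  automountServiceAccountToken: false   # no token is mounted unless explicitly required
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image reference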

EGTM-016 Misconfiguration leads to lack of Envoy Proxy access activity visibility

ID: EGTM-016 | UID: EGTM-EG-004 | Category: Envoy Gateway | Priority: Low

Risk: There is a risk that threat actors establish persistence and move laterally through the cluster unnoticed due to limited visibility into access and application-level activity.

Threat: Threat actors establish persistence and move laterally through the cluster unnoticed.

Recommendation: Configure access logging in the EnvoyProxy. Use ProxyAccessLogFormatType (Text or JSON) to specify the log format and ensure that the logs are sent to the desired sink types by setting the ProxyAccessLogSinkType. Make use of FileEnvoyProxyAccessLog or OpenTelemetryEnvoyProxyAccessLog to configure File and OpenTelemetry sinks, respectively. If the settings aren't defined, the default format is sent to stdout.

Additionally, consider leveraging a central logging mechanism such as Fluentd to enhance visibility into access activity and enable effective incident response (IR).
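A hedged sketch of such a configuration is shown below (field names follow the v1alpha1 EnvoyProxy API at the time of writing; verify against the observability tasks for your version):

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: access-logging          # illustrative
  namespace: envoy-gateway-system
spec:
  telemetry:
    accessLog:
      settings:
        - format:
            type: JSON          # structured logs ease downstream analysis
            json:
              status: "%RESPONSE_CODE%"
              path: "%REQ(:PATH)%"
          sinks:
            - type: File
              file:
                path: /dev/stdout   # collected by the cluster logging agent

Note that an EnvoyProxy resource only takes effect when referenced from a GatewayClass via parametersRef.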

EGTM-017 Misconfiguration leads to lack of Envoy Gateway activity visibility

ID: EGTM-017 | UID: EGTM-EG-005 | Category: Envoy Gateway | Priority: Low

Risk: There is a risk that an insider misconfigures an Envoy Gateway component and goes unnoticed due to a low-touch default logging configuration that responsible stakeholders are not fully aware of or do not have immediate access to.

Threat: The threat emerges from an insider misconfiguring an Envoy Gateway component without detection.

Recommendation: Configure the logging level of the Envoy Gateway using the 'level' field in EnvoyGatewayLogging. Ensure the appropriate logging levels are set for relevant components such as 'gateway-api', 'xds-translator', or 'global-ratelimit'. If left unspecified, the logging level defaults to "info", which may not provide sufficient detail for security monitoring.

Employ a centralised logging mechanism, like Fluentd, to enhance visibility into application-level activity and to enable efficient incident response.
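A minimal sketch of the startup configuration follows (this is the file Envoy Gateway consumes at boot, e.g. via the ConfigMap installed by the Helm chart; component names should be checked against the EnvoyGatewayLogging API reference):

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
provider:
  type: Kubernetes
logging:
  level:
    default: info            # baseline verbosity for all components
    xds-translator: debug    # raise verbosity where closer scrutiny is needed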

EGTM-021 Exposed Envoy Proxy admin interface

ID: EGTM-021 | UID: EGTM-EG-006 | Category: Envoy Gateway | Priority: Low

Risk: There is a risk that the admin interface is exposed without valid business reason, increasing the attack surface.

Threat: Exposed admin interfaces give internal attackers the option to affect production traffic in unauthorised ways, and the option to exploit any vulnerabilities which may be present in the admin interface (e.g. by orchestrating malicious GET requests to the admin interface through CSRF, compromising Envoy Proxy global configuration or shutting off the service entirely e.g. /quitquitquit).

Recommendation: The Envoy Proxy admin interface is only exposed to localhost, meaning that it is secure by default. However, due to the risk of misconfiguration, this recommendation is included.

Due to the importance of the admin interface, it is recommended to ensure that Envoy Proxies have not been accidentally misconfigured to expose the admin interface to untrusted networks.

EGTM-025 Envoy Proxy pods deployed running as root user in the container

ID: EGTM-025 | UID: EGTM-CS-011 | Category: Container Security | Priority: Low

Risk: The presence of a vulnerability, be it in the kernel or another system component, when coupled with containers running as root, could enable a threat actor to escape the container, thereby compromising the confidentiality, integrity, or availability of cluster resources.

Threat: The Envoy Proxy container’s root-user configuration can be leveraged by an attacker to escalate privileges, execute a container breakout, and traverse across trust boundaries.

Recommendation: By default, Envoy Gateway deployments do not use root users. Nonetheless, in case a custom image or deployment manifest is to be used, make sure Envoy Proxy pods run as a non-root user with a high UID within the container.

Set runAsUser and runAsGroup security context options to specific UIDs (e.g., runAsUser: 1000 & runAsGroup: 3000) to ensure the container operates with the stipulated non-root user and group ID. If using helm chart deployment, define the user and group ID in the values.yaml file or via the command line during helm install / upgrade.
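A minimal sketch of these options at the Pod level (the image reference is a placeholder; with the Helm chart the same values are supplied through values.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: envoy-proxy-example
spec:
  securityContext:
    runAsUser: 1000        # non-root UID, as recommended above
    runAsGroup: 3000
    runAsNonRoot: true     # refuse to start if the image would run as UID 0
  containers:
    - name: envoy
      image: docker.io/envoyproxy/envoy:latest   # placeholder image reference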

Appendix

In Scope Threat Actor Details

Application Developer

  • Capability: Leverage internal knowledge and personal access to the Envoy Gateway infrastructure to move laterally and transit trust boundaries.

  • Personal motivation: Disgruntled / personal grievances; financial incentives.

  • Envoy Gateway attack samples: Misconfigure XRoute resources to expose internal applications; misconfigure SecurityPolicy objects, reducing the security posture of an application.

Application Administrator

  • Capability: Abuse privileged status to disrupt operations and tenant cluster services through Envoy Gateway misconfiguration.

  • Personal motivation: Disgruntled / personal grievances; financial incentives.

  • Envoy Gateway attack samples: Create malicious routes to internal applications; introduce malicious Envoy Proxy images; expose the Envoy Proxy admin interface.

Cluster Operator

  • Capability: Alter application-level deployments by misconfiguring resource dependencies & SCM to introduce vulnerabilities.

  • Personal motivation: Disgruntled / personal grievances; financial incentives; notoriety.

  • Envoy Gateway attack samples: Deploy malicious resources to expose internal applications; access authentication secrets; fall victim to phishing attacks and inadvertently share authentication credentials to cloud infrastructure or Kubernetes clusters.

Vandal: Script Kiddie, Trespasser

  • Capability: Uses publicly available tools and applications (Nmap, Metasploit, CVE PoCs).

  • Personal motivation: Curiosity; personal fame through defacement / denial of service of prominent public-facing web resources.

  • Envoy Gateway attack samples: Small-scale DoS; launching prepackaged exploits and running crypto mining tools; exploiting public-facing application services such as the bastion host to gain an initial foothold in the environment.

Motivated Individual: Political Activist, Thief, Terrorist

  • Capability: Writes tools and exploits required for their means if sufficiently motivated; tends to use these in a targeted fashion against specific organisations; may combine publicly available exploits in a targeted fashion; tampers with open source supply chains.

  • Personal motivation: Personal gain (political or ideological).

  • Envoy Gateway attack samples: Phishing, DDoS, and exploitation of known vulnerabilities; compromising third-party components such as Helm charts and container images to inject malicious code and propagate access throughout the environment.

Organised Crime: Syndicates, State-affiliated Groups

  • Capability: Writes tools and exploits required for their means; tends to use these in a non-targeted fashion unless motivation is sufficiently high; devotes considerable resources, writes exploits, can bribe/coerce, and can launch targeted attacks.

  • Personal motivation: Ransom; mass extraction of PII / credentials / PCI data; financial incentives.

  • Envoy Gateway attack samples: Social engineering, phishing, ransomware, and coordinated attacks; intercepting and replaying JWT tokens (via MitM) between tenant user(s) and Envoy Gateway to modify app configs in transit.

Identified Threats by Priority

ID: EGTM-001 | UID: EGTM-GW-001 | Category: Gateway API | Priority: High

Risk: Self-signed certificates (which do not comply with PKI best practices) could lead to unauthorised access to the private key associated with the certificate used for inbound TLS termination at Envoy Proxy, compromising the confidentiality and integrity of proxied traffic.

Threat: Compromise of the private key associated with the certificate used for inbound TLS terminating at Envoy Proxy.

Recommendation: The Envoy Gateway quickstart demonstrates how to set up a Secure Gateway using an example where a self-signed root certificate is created using openssl. As stated in the Envoy Gateway documentation, this is not a suitable configuration for Production usage. It is recommended that PKI best practices are followed, whereby certificates are signed by an Intermediary CA which sits underneath an organisational 'offline' Root CA.

PKI best practices should also apply to the management of client certificates when using mTLS. The Envoy Gateway mTLS task shows how to set up client certificates using self-signed certificates. As with gateway certificates, and as mentioned in the documentation, this configuration should not be used in production environments.
ID: EGTM-002 | UID: EGTM-CS-001 | Category: Container Security | Priority: High

Risk: There is a risk that a threat actor could compromise the Kubernetes secret containing the Envoy private key, allowing the attacker to decrypt Envoy Proxy traffic, compromising the confidentiality of proxied traffic.

Threat: Kubernetes secret containing the Envoy private key is compromised and used to decrypt proxied traffic.

Recommendation: Certificate management best practices mandate short-lived key material where practical, meaning that a mechanism for rotation of private keys and certificates is required, along with a way for certificates to be mounted into Envoy containers. If Kubernetes secrets are used, when a certificate expires, the associated secret must be updated, and Envoy containers must be redeployed. Instead of a manual configuration, it is recommended that cert-manager is used.
ID: EGTM-004 | UID: EGTM-K8-002 | Category: Container Security | Priority: High

Risk: There is a risk that a threat actor could abuse misconfigured RBAC to access the Envoy Gateway ClusterRole (envoy-gateway-role) and use it to expose all secrets across the cluster, thus compromising the confidentiality and integrity of tenant data.

Threat: Compromised Envoy Gateway or misconfigured ClusterRoleBinding (envoy-gateway-rolebinding) to Envoy Gateway ClusterRole (envoy-gateway-role), provides access to resources and secrets in different namespaces.

Recommendation: Users should be aware that Envoy Gateway uses a ClusterRole (envoy-gateway-role) when deployed via the Helm chart, to allow management of Envoy Proxies across different namespaces. This ClusterRole is powerful and includes the ability to read secrets in namespaces which may not be within the purview of Envoy Gateway.

Kubernetes best-practices involve restriction of ClusterRoleBindings, with the use of RoleBindings where possible to limit access per namespace by specifying the namespace in metadata. Namespace isolation reduces the impact of compromise from cluster-scoped roles. Ideally, fine-grained K8s roles should be created per the principle of least privilege to ensure they have the minimum access necessary for role functions.

The pull request #1656 introduced the use of Roles and RoleBindings in namespaced mode. This feature can be leveraged to reduce the amount of permissions required by the Envoy Gateway.
ID: EGTM-007 | UID: EGTM-EG-002 | Category: Envoy Gateway | Priority: High

Risk: There is a risk that a threat actor could exploit misconfigured Kubernetes RBAC to create or modify Gateway API resources with no business need, potentially leading to the compromise of the confidentiality, integrity, and availability of resources and traffic within the cluster.

Threat: Unauthorised creation or misconfiguration of Gateway API resources by a threat actor with cluster-scoped access.

Recommendation: Configure the apiGroup and resource fields in RBAC policies to restrict access to Gateway and GatewayClass resources. Enable namespace isolation by using the namespace field, preventing unauthorised access to gateways in other namespaces.
ID: EGTM-009 | UID: EGTM-GW-002 | Category: Gateway API | Priority: High

Risk: There is a risk that a co-tenant misconfigures Gateway or Route resources, compromising the confidentiality, integrity, and availability of routed traffic through Envoy Gateway.

Threat: Malicious or accidental co-tenant misconfiguration of Gateways and Routes associated with other application teams.

Recommendation: Dedicated Envoy Gateways should be provided to each tenant within their respective namespace. A one-to-one relationship should be established between GatewayClass and Gateway resources, meaning that each tenant namespace should have their own GatewayClass watched by a unique Envoy Gateway Controller as defined here in the Deployment Mode documentation.

Application Admins should have write permissions on the Gateway resource, but only in their specific namespaces, and Application Developers should only hold write permissions on Route resources. To enact this access control schema, follow the Write Permissions for Advanced 4 Tier Model described in the Kubernetes Gateway API security model. Examples of secured gateway-route topologies can be found here within the Kubernetes Gateway API docs.

Optionally, consider a GitOps model, where only the GitOps operator has the permission to deploy or modify custom resources in production.
ID: EGTM-014 | UID: EGTM-CS-006 | Category: Container Security | Priority: High

Risk: There is a risk that a supply chain attack on Envoy Gateway results in an arbitrary compromise of the confidentiality, integrity or availability of tenant data.

Threat: Supply chain threat actor introduces malicious code into Envoy Gateway or Proxy.

Recommendation: The Envoy Gateway project should continue to work towards conformance with supply-chain security best practices throughout the project lifecycle (for example, as set out in the CNCF Software Supply Chain Best Practices Whitepaper). Adherence to Supply-chain Levels for Software Artefacts (SLSA) standards is crucial for maintaining the security of the system. Employ version control systems to monitor the source and build platforms and assign responsibility to a specific stakeholder.

Integrate a supply chain security tool such as Sigstore, which provides native capabilities for signing and verifying container images and software artefacts. Software Bill of Materials (SBOM), Vulnerability Exploitability eXchange (VEX), and signed artefacts should also be incorporated into the security protocol.
ID: EGTM-020 | UID: EGTM-CS-009 | Category: Container Security | Priority: High

Risk: There is a risk that a threat actor exploits an Envoy Proxy vulnerability to remote code execution (RCE) due to out of date or misconfigured Envoy Proxy pod deployment, compromising the confidentiality and integrity of Envoy Proxy along with the availability of the proxy service.

Threat: Deployment of an Envoy Proxy or Gateway image containing exploitable CVEs.

Recommendation: Always use the latest version of the Envoy Proxy image. Regularly check for updates and patch the system as soon as updates become available. Implement a CI/CD pipeline that includes security checks for images and prevents deployment of insecure configurations. A tool such as Snyk can be used to provide container vulnerability scanning to mitigate the risk of known vulnerabilities.

Utilise the Pod Security Admission controller to enforce Pod Security Standards and configure the pod security context to limit its capabilities per the principle of least privilege.
ID: EGTM-022 | UID: EGTM-CS-010 | Category: Container Security | Priority: High

Risk: There is a risk that the OIDC client secret (for OIDC authentication) and user password hashes (for basic authentication) get leaked due to misconfigured RBAC permissions.

Threat: Unauthorised access to the application due to credential leakage.

Recommendation: Ensure that only authorised users and service accounts are able to access secrets. This is especially important in namespaces where SecurityPolicy objects are configured, since those namespaces are the ones to store secrets containing the client secret (in OIDC scenarios) and user password hashes (in basic authentication scenarios).

To do so, minimise the use of ClusterRoles and Roles allowing listing and getting secrets. Perform periodic audits of RBAC permissions.
ID: EGTM-023 | UID: EGTM-EG-007 | Category: Envoy Gateway | Priority: High

Risk: There is a risk of unauthorised access due to the use of basic authentication, which does not enforce any password restriction in terms of complexity and length. In addition, password hashes are stored in SHA1 format, which is a deprecated hashing function.

Threat: Unauthorised access to the application due to weak authentication mechanisms.

Recommendation: It is recommended to make use of stronger authentication mechanisms (i.e. JWT authentication and OIDC authentication) instead of basic authentication. These authentication mechanisms have many advantages, such as the use of short-lived credentials and a central management of security policies through the identity provider.
EGTM-008 (EGTM-EG-003, Envoy Gateway): There is a risk of a threat actor misconfiguring static config and compromising the integrity of Envoy Gateway, ultimately leading to the compromised confidentiality, integrity, or availability of tenant data and cluster resources.

Accidental or deliberate misconfiguration of static configuration leads to a misconfigured deployment of Envoy Gateway; for example, logging parameters could be modified or the global rate-limiting configuration misconfigured.

Severity: Medium. Implement a GitOps model, utilising Kubernetes' Role-Based Access Control (RBAC) and adhering to the principle of least privilege to minimise human intervention on the cluster. For instance, tools like ArgoCD can be used for declarative GitOps deployments, ensuring all changes are tracked and reviewed. Additionally, configure your source control management (SCM) system to include mandatory pull request (PR) reviews, commit signing, and protected branches to ensure only authorised changes can be committed to the start-up configuration.
EGTM-010 (EGTM-CS-005, Container Security): There is a risk that a threat actor exploits a weak pod security context, compromising the confidentiality, integrity and availability of a node and the resources / services which run on it.

Threat Actor who has compromised a pod exploits weak security context to escape to a node, potentially leading to the compromise of Envoy Proxy or Gateway running on the same node.

Severity: Medium. To mitigate this risk, apply Pod Security Standards at a minimum of Baseline level to all namespaces, especially those containing Envoy Gateway and Proxy Pods. Pod Security Standards are implemented through K8s Pod Security Admission to provide admission control modes (enforce, audit, and warn) for namespaces. Pod Security Standards can be enforced by namespace labels as shown here, to enforce a baseline level of pod security in specific namespaces.
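For example, a namespace can opt into the Baseline standard (and audit and warn against Restricted) with labels like the following; the namespace name is illustrative:

apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    pod-security.kubernetes.io/enforce: baseline
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted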

Further enhance the security by implementing a sandboxing solution such as gVisor for Envoy Gateway and Proxy Pods to isolate the application from the host kernel. This can be set within the runtimeClassName of the Pod specification.
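A minimal sketch of the runtimeClassName setting, assuming a RuntimeClass named gvisor (backed by runsc) has already been installed on the cluster; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-example
spec:
  runtimeClassName: gvisor   # must match an installed RuntimeClass
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image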
EGTM-012 (EGTM-GW-004, Gateway API): There is a risk that a threat actor could abuse excessive RBAC privileges to create ReferenceGrant resources. These resources could then be used to create cross-namespace communication, leading to unauthorised access to the application. This could compromise the confidentiality and integrity of resources and configuration in the affected namespaces and potentially disrupt the availability of services that rely on these object references.

A ReferenceGrant is created, which validates traffic to cross namespace trust boundaries without a valid business reason, such as a route in one tenant's namespace referencing a backend in another.

Severity: Medium. Ensure that the ability to create ReferenceGrant resources is restricted to the minimum number of people. Pay special attention to ClusterRoles that allow that action.
EGTM-018 (EGTM-GW-006, Gateway API): There is a risk that malicious requests could lead to a Denial of Service (DoS) attack, thereby reducing API gateway availability due to misconfigurations in rate-limiting or load balancing controls, or a lack of route timeout enforcement.

Reduced API gateway availability due to an attacker's maliciously crafted request (e.g., QoD) potentially inducing a Denial of Service (DoS) attack.

Severity: Medium. To ensure high availability and to mitigate potential security threats, adhere to the Envoy Gateway documentation for the configuration of a rate-limiting filter and load balancing.

Further, adhere to best practices for configuring Envoy Proxy as an edge proxy documented here within the EnvoyProxy docs. This involves configuring TCP and HTTP proxies with specific settings, including restricting access to the admin endpoint, setting the overload manager and listener / cluster buffer limits, enabling use_remote_address, setting connection and stream timeouts, limiting maximum concurrent streams, setting initial stream window size limit, and configuring action on headers_with_underscores.

Path normalisation should be enabled to minimise path confusion vulnerabilities. These measures help protect against volumetric threats such as Denial of Service (DoS) attacks. Utilise custom resources to implement policy attachment, thereby exposing request limit configuration for route types, as sketched below.
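As an illustration of policy attachment for request limits, a BackendTrafficPolicy can attach a global rate limit to the quickstart HTTPRoute. This is only a sketch: the field names follow the v1alpha1 API and may change across versions, and global rate limiting additionally requires the rate limit service to be enabled in the EnvoyGateway configuration:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: rate-limit-example
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  rateLimit:
    type: Global
    global:
      rules:
      - limit:
          requests: 100
          unit: Second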
EGTM-019 (EGTM-DP-004, Container Security): There is a risk that replay attacks using stolen or reused JSON Web Tokens (JWTs) can compromise transmission integrity, thereby undermining the confidentiality and integrity of the data plane.

Transmission integrity is compromised due to replay attacks using stolen or reused JSON Web Tokens (JWTs).

Severity: Medium. Comply with JWT best practices for enhanced security, paying special attention to the use of short-lived tokens, which reduce the window of opportunity for a replay attack. The exp claim can be used to set token expiration times.
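For illustration, a decoded JWT payload using exp for a five-minute lifetime; all claim values are made up. A unique jti claim additionally allows a server to detect reuse of the same token:

{
  "sub": "user@example.com",
  "iat": 1700000000,
  "exp": 1700000300,
  "jti": "0b41f3a2-8c77-4b8e-9f1d-1a2b3c4d5e6f"
}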
EGTM-024 (EGTM-EG-008, Envoy Gateway): There is a risk of developers getting more privileges than required due to the use of SecurityPolicy, ClientTrafficPolicy, EnvoyPatchPolicy and BackendTrafficPolicy. These resources can be attached to a Gateway resource. Therefore, a developer with permission to deploy them would be able to modify a Gateway configuration by targeting the gateway in the policy manifest. This conflicts with the Advanced 4 Tier Model, where developers do not have write permissions on Gateways.

Excessive developer permissions lead to a misconfiguration and/or unauthorised access.

Severity: Medium. Considering the Tenant C scenario (represented in the Architecture Diagram), if a developer can create SecurityPolicy, ClientTrafficPolicy, EnvoyPatchPolicy or BackendTrafficPolicy objects in namespace C, they would be able to modify a Gateway configuration by attaching the policy to the gateway. In such scenarios, it is recommended to either:

a. Create a separate namespace, where developers have no permissions, to host tenant C's gateway. Note that, due to design decisions, the SecurityPolicy/EnvoyPatchPolicy/ClientTrafficPolicy/BackendTrafficPolicy object can only target resources deployed in the same namespace. Therefore, having a separate namespace for the gateway would prevent developers from attaching the policy to the gateway.

b. Forbid the creation of these policies for developers in namespace C.

On the other hand, in scenarios similar to tenants A and B, where a shared gateway namespace is in place, this issue is more limited. Note that in this scenario, developers don't have access to the shared gateway namespace.

In addition, it is important to mention that EnvoyPatchPolicy resources can also be attached to GatewayClass resources. This means that, in order to comply with the Advanced 4 Tier model, individuals with the Application Administrator role should not have access to this resource either.
EGTM-003 (EGTM-EG-001, Envoy Gateway): There is a risk that a threat actor could downgrade the security of proxied connections by configuring a weak set of cipher suites, compromising the confidentiality and integrity of proxied traffic.

Exploit weak cipher suite configuration to downgrade security of proxied connections.

Severity: Low. Users operating in highly regulated environments may need to tightly control the TLS protocol and associated cipher suites, blocking non-conforming incoming connections to the gateway.

EnvoyProxy bootstrap config can be customised as per the customise EnvoyProxy documentation. In addition, from v.1.0.0, it is possible to configure common TLS properties for a Gateway or XRoute through the ClientTrafficPolicy object, as sketched below.
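A sketch of restricting TLS versions and cipher suites through a ClientTrafficPolicy; the field names follow the v1alpha1 API, and the cipher list is purely illustrative:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: restrict-tls
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  tls:
    minVersion: "1.2"
    ciphers:
    - ECDHE-ECDSA-AES256-GCM-SHA384
    - ECDHE-RSA-AES256-GCM-SHA384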
EGTM-005 (EGTM-CP-002, Container Security): There is a risk that a threat actor who has obtained access to the Envoy Gateway pod could exploit the lack of AppArmor and Seccomp profiles in the Envoy Gateway deployment to attempt a container breakout, given the presence of an exploitable vulnerability, potentially impacting the confidentiality and integrity of namespace resources.

Unauthorised syscalls and malicious code running in the Envoy Gateway pod.

Severity: Low. Implement AppArmor policies by setting <container_name>: <profile_ref> within container.apparmor.security.beta.kubernetes.io (note, this config is set per container). Well-defined AppArmor policies may provide greater protection from unknown threats.

Enforce Seccomp profiles by setting the seccompProfile under securityContext. Ideally, a fine-grained profile should be used to restrict access to only necessary syscalls; however, the --seccomp-default flag can be set to resort to RuntimeDefault, which provides a container-runtime-specific default profile. Example seccomp profiles can be found here.

To further enhance pod security, consider implementing SELinux via seLinuxOptions for additional syscall attack surface reduction. Setting readOnlyRootFilesystem: true enforces an immutable root filesystem, preventing the addition of malicious binaries to the PATH and increasing the attack cost. Together, these configuration items improve the pod's Security Context.
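Putting these recommendations together, a hardened container spec might look like the following sketch; the pod name and image are placeholders, and the AppArmor annotation uses the per-container beta syntax mentioned above:

apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
  annotations:
    container.apparmor.security.beta.kubernetes.io/app: runtime/default
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image
    securityContext:
      seccompProfile:
        type: RuntimeDefault
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]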
EGTM-006 (EGTM-CS-004, Container Security): There is a risk that a threat actor exploits a vulnerability in Envoy Proxy to expose a reverse shell, enabling them to compromise the confidentiality, integrity and availability of tenant data via a secondary attack.

If an external attacker managed to exploit a vulnerability in Envoy, the presence of a shell would be greatly helpful for the attacker in terms of potentially pivoting, escalating, or establishing some form of persistence.

Severity: Low. By default, Envoy uses a distroless image since v.0.6.0, which does not ship a shell. Therefore, ensure the EnvoyProxy image is up-to-date and patched with the latest stable version.

If using private EnvoyProxy images, use a lightweight EnvoyProxy image without a shell or debugging tool(s) which may be useful for an attacker.

An AuditPolicy (audit.k8s.io/v1beta1) can be configured to record API calls made within your cluster, allowing for identification of malicious traffic and enabling incident response. Requests are recorded based on stages which delineate between the lifecycle stage of the request made (e.g., RequestReceived, ResponseStarted, & ResponseComplete).
EGTM-011 (EGTM-GW-003, Gateway API): There is a risk that a gateway owner (or someone with the ability to set namespace labels) maliciously or accidentally binds routes across namespace boundaries, potentially compromising the confidentiality and integrity of traffic in a multitenant scenario.

If a Route Binding within a Gateway Listener is configured based on a custom label, it could allow a malicious internal actor with the ability to label namespaces to change the set of namespaces supported by the Gateway.

Severity: Low. Consider the use of custom admission control to restrict what labels can be set on namespaces through tooling such as Kubewarden, Kyverno, and OPA Gatekeeper. Route binding should follow the Kubernetes Gateway API security model, as shown here, to connect gateways in different namespaces.
EGTM-013 (EGTM-GW-005, Gateway API): There is a risk that an unauthorised actor deploys an unauthorised GatewayClass due to GatewayClass namespace validation not being configured, leading to non-compliance with business and security requirements.

Unauthorised deployment of Gateway resource via GatewayClass template which crosses namespace trust boundaries.

Severity: Low. Leverage GatewayClass namespace validation to limit the namespaces where GatewayClasses can be run, through a tool such as OPA Gatekeeper. Reference pull request #24 within gatekeeper-library, which outlines how to add GatewayClass namespace validation through a GatewayClassNamespaces API resource kind within the constraints.gatekeeper.sh/v1beta1 apiGroup.
EGTM-015 (EGTM-CS-007, Container Security): There is a risk that threat actors could exploit ServiceAccount tokens for illegitimate authentication, thereby leading to privilege escalation and the undermining of gateway API resources' integrity, confidentiality, and availability.

The threat arises from threat actors impersonating the envoy-gateway ServiceAccount through the replay of ServiceAccount tokens, thereby achieving escalated privileges and gaining unauthorised access to Kubernetes resources.

Severity: Low. Limit the creation of ServiceAccounts to only when necessary, specifically refraining from using default service account tokens, especially for high-privilege service accounts. For legacy clusters running Kubernetes version 1.21 or earlier, note that ServiceAccount tokens are long-lived by default. To disable the automatic mounting of the service account token, set automountServiceAccountToken: false in the PodSpec.
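A minimal sketch of disabling token automounting in a PodSpec; the pod name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: no-token-example
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder image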
EGTM-016 (EGTM-EG-004, Envoy Gateway): There is a risk that threat actors establish persistence and move laterally through the cluster unnoticed due to limited visibility into access and application-level activity.

Threat actors establish persistence and move laterally through the cluster unnoticed.

Severity: Low. Configure access logging in the EnvoyProxy. Use ProxyAccessLogFormatType (Text or JSON) to specify the log format and ensure that the logs are sent to the desired sink types by setting the ProxyAccessLogSinkType. Make use of FileEnvoyProxyAccessLog or OpenTelemetryEnvoyProxyAccessLog to configure File and OpenTelemetry sinks, respectively. If the settings aren't defined, the default format is sent to stdout.

Additionally, consider leveraging a central logging mechanism such as Fluentd to enhance visibility into access activity and enable effective incident response (IR).
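A sketch of enabling JSON access logs to stdout through the EnvoyProxy resource; the field names follow the v1alpha1 telemetry API and may differ across Envoy Gateway versions:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  telemetry:
    accessLog:
      settings:
      - format:
          type: JSON
          json:
            start_time: "%START_TIME%"
            method: "%REQ(:METHOD)%"
            path: "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%"
            response_code: "%RESPONSE_CODE%"
        sinks:
        - type: File
          file:
            path: /dev/stdout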
EGTM-017 (EGTM-EG-005, Envoy Gateway): There is a risk that an insider misconfigures an Envoy Gateway component and goes unnoticed due to a low-touch (default) logging configuration which responsible stakeholders are not aptly aware of or do not have immediate access to.

The threat emerges from an insider misconfiguring an Envoy Gateway component without detection.

Severity: Low. Configure the logging level of the Envoy Gateway using the 'level' field in EnvoyGatewayLogging. Ensure the appropriate logging levels are set for relevant components such as 'gateway-api', 'xds-translator', or 'global-ratelimit'. If left unspecified, the logging level defaults to "info", which may not provide sufficient detail for security monitoring.

Employ a centralised logging mechanism, like Fluentd, to enhance visibility into application-level activity and to enable efficient incident response.
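A sketch of the relevant fragment of the EnvoyGateway startup configuration, assuming the v1alpha1 logging fields and the component names mentioned above:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
provider:
  type: Kubernetes
logging:
  level:
    default: info
    gateway-api: debug
    xds-translator: debug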
EGTM-021 (EGTM-EG-006, Envoy Gateway): There is a risk that the admin interface is exposed without valid business reason, increasing the attack surface.

Exposed admin interfaces give internal attackers the option to affect production traffic in unauthorised ways, and the option to exploit any vulnerabilities which may be present in the admin interface (e.g. by orchestrating malicious GET requests to the admin interface through CSRF), compromising Envoy Proxy global configuration or shutting off the service entirely (e.g., via /quitquitquit).

Severity: Low. The Envoy Proxy admin interface is only exposed to localhost, meaning that it is secure by default. However, due to the risk of misconfiguration, this recommendation is included.

Due to the importance of the admin interface, it is recommended to ensure that Envoy Proxies have not been accidentally misconfigured to expose the admin interface to untrusted networks.
EGTM-025 (EGTM-CS-011, Container Security): The presence of a vulnerability, be it in the kernel or another system component, when coupled with containers running as root, could enable a threat actor to escape the container, thereby compromising the confidentiality, integrity, or availability of cluster resources.

The Envoy Proxy container’s root-user configuration can be leveraged by an attacker to escalate privileges, execute a container breakout, and traverse across trust boundaries.

Severity: Low. By default, Envoy Gateway deployments do not use root users. Nonetheless, in case a custom image or deployment manifest is to be used, make sure Envoy Proxy pods run as a non-root user with a high UID within the container. Set runAsUser and runAsGroup security context options to specific UIDs (e.g., runAsUser: 1000 & runAsGroup: 3000) to ensure the container operates with the stipulated non-root user and group ID. If using helm chart deployment, define the user and group ID in the values.yaml file or via the command line during helm install / upgrade.
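A minimal securityContext sketch enforcing the non-root settings above, with runAsNonRoot added as an extra guard:

securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  runAsGroup: 3000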

Attack Trees

Attack trees offer a methodical way of describing the security of systems, based on varying attack patterns. It’s important to approach the review of attack trees from a top-down perspective. The top node, also known as the root node, symbolises the attacker’s primary objective. This goal is then broken down into subsidiary aims, each reflecting a different strategy to attain the root objective. This deconstruction persists until reaching the lowest level objectives or ’leaf nodes’, which depict attacks that can be directly launched.

It is essential to note that attack trees presented here are speculative paths for potential exploitation. The Envoy Gateway project is in a continuous development cycle, and as the project evolves, new vulnerabilities may be exposed, or additional controls could be introduced. Therefore, the threats illustrated in the attack trees should be perceived as point-in-time reflections of the project’s current state at the time of writing this threat model.

Node ID Schema

Each node in the attack tree is assigned a unique identifier following the AT#-## schema. This allows easy reference to specific nodes in the attack trees throughout the threat model. The first part of the ID (AT#) signifies the attack tree number, while the second part (##) represents the node number within that tree.

Logical Operators

Logical AND/OR operators are used to represent the relationship between parent and child nodes. An AND operator means that all child nodes must be achieved to satisfy the parent node. An OR operator between a parent node and its child nodes means that any of the child nodes can be achieved to satisfy the parent node.

Attack Tree Node Legend

[Figures: attack tree node legend and attack tree diagrams AT0 through AT6]

3.14 - TLS Passthrough

This task will walk through the steps required to configure TLS Passthrough via Envoy Gateway. Unlike configuring Secure Gateways, where the Gateway terminates the client TLS connection, TLS Passthrough allows the application itself to terminate the TLS connection, while the Gateway routes the requests to the application based on SNI headers.

Prerequisites

  • OpenSSL to generate TLS assets.

Installation

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

TLS Certificates

Generate the certificates and keys used by the Service to terminate client TLS connections. For the application, we’ll deploy a sample echoserver app, with the certificates loaded in the application Pod.

Note: These certificates will not be used by the Gateway, but will remain in the application scope.

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt

Create a certificate and a private key for passthrough.example.com:

openssl req -out passthrough.example.com.csr -newkey rsa:2048 -nodes -keyout passthrough.example.com.key -subj "/CN=passthrough.example.com/O=some organization"
openssl x509 -req -sha256 -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in passthrough.example.com.csr -out passthrough.example.com.crt

Store the cert/keys in a Secret:

kubectl create secret tls server-certs --key=passthrough.example.com.key --cert=passthrough.example.com.crt

Deployment

Deploy TLS Passthrough application Deployment, Service and TLSRoute:

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/tls-passthrough.yaml

Patch the Gateway from the Quickstart to include a TLS listener that listens on port 6443 and is configured for TLS mode Passthrough:

kubectl patch gateway eg --type=json --patch '
  - op: add
    path: /spec/listeners/-
    value:
      name: tls
      protocol: TLS
      hostname: passthrough.example.com
      port: 6443
      tls:
        mode: Passthrough
   '

Testing

Test the functionality by sending traffic to the External IP of the Gateway:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Curl the example app through the Gateway, e.g. Envoy proxy:

curl -v -HHost:passthrough.example.com --resolve "passthrough.example.com:6443:${GATEWAY_HOST}" \
--cacert example.com.crt https://passthrough.example.com:6443/get

Get the name of the Envoy service created by the example Gateway:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

Port forward to the Envoy service:

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 6043:6443 &

Curl the example app through Envoy proxy:

curl -v --resolve "passthrough.example.com:6043:127.0.0.1" https://passthrough.example.com:6043 \
--cacert passthrough.example.com.crt

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the Secret:

kubectl delete secret/server-certs

Next Steps

Check out the Developer Guide to get involved in the project.

3.15 - TLS Termination for TCP

This task will walk through the steps required to configure TLS Terminate mode for TCP traffic via Envoy Gateway. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.

Prerequisites

  • OpenSSL to generate TLS assets.

Installation

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

TLS Certificates

Generate the certificates and keys used by the Gateway to terminate client TLS connections.

Create a root certificate and private key to sign certificates:

openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt

Create a certificate and a private key for www.example.com:

openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt

Store the cert/key in a Secret:

kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt

Install the TLS Termination for TCP example resources:

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/tls-termination.yaml

Verify the Gateway status:

kubectl get gateway/eg -o yaml

Testing

Get the External IP of the Gateway:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

Query the example app through the Gateway:

curl -v -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cacert example.com.crt https://www.example.com/get

Get the name of the Envoy service created by the example Gateway:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

Port forward to the Envoy service:

kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &

Query the example app through Envoy proxy:

curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get

3.16 - Using cert-manager For TLS Termination

This task shows how to set up cert-manager to automatically create certificates and secrets for use by Envoy Gateway. It will first show how to enable the self-sign issuer, which is useful to test that cert-manager and Envoy Gateway can talk to each other. Then it shows how to use Let’s Encrypt’s staging environment. Changing to the Let’s Encrypt production environment is straight-forward after that.

Prerequisites

  • A Kubernetes cluster and a configured kubectl.
  • The helm command.
  • The curl command or similar for testing HTTPS requests.
  • For the ACME HTTP-01 challenge to work
    • your Gateway must be reachable on the public Internet.
    • the domain name you use (we use www.example.com) must point to the Gateway’s external IP(s).

Installation

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Deploying cert-manager

This is a summary of cert-manager Installation with Helm.

Installing cert-manager is straight-forward, but currently (v1.12) requires setting a feature gate to enable the Gateway API support.

$ helm repo add jetstack https://charts.jetstack.io
$ helm upgrade --install --create-namespace --namespace cert-manager --set installCRDs=true --set featureGates=ExperimentalGatewayAPISupport=true cert-manager jetstack/cert-manager

You should now have cert-manager running with nothing to do:

$ kubectl wait --for=condition=Available deployment -n cert-manager --all
deployment.apps/cert-manager condition met
deployment.apps/cert-manager-cainjector condition met
deployment.apps/cert-manager-webhook condition met

$ kubectl get -n cert-manager deployment
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
cert-manager              1/1     1            1           42m
cert-manager-cainjector   1/1     1            1           42m
cert-manager-webhook      1/1     1            1           42m

A Self-Signing Issuer

cert-manager can have any number of issuer configurations. The simplest issuer type is SelfSigned. It simply takes the certificate request and signs it with the private key it generates for the TLS Secret.

Self-signed certificates don't help establish trust with clients, but they are great for initial testing, due to their simplicity.

To install the self-signing issuer, run

$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
EOF

Creating a TLS Gateway Listener

We now have to patch the example Gateway to reference cert-manager:

$ kubectl patch gateway/eg --patch '
metadata:
  annotations:
    cert-manager.io/cluster-issuer: selfsigned
    cert-manager.io/common-name: "Hello World!"
spec:
  listeners:
  - name: https
    protocol: HTTPS
    hostname: www.example.com
    port: 443
    tls:
      mode: Terminate
      certificateRefs:
      - kind: Secret
        name: eg-https
' --type=merge

You could instead create a new Gateway serving HTTPS, if you’d prefer. cert-manager doesn’t care, but we’ll keep it all together in this task.

Nowadays, X.509 certificates don’t use the subject Common Name for hostname matching, so you can set it to whatever you want, or leave it empty. The important parts here are

  • the annotation referencing the “selfsigned” ClusterIssuer we created above,
  • the hostname, which is required (but see #6051 for computing it based on attached HTTPRoutes), and
  • the named Secret, which is what cert-manager will create for us.

The annotations are documented in Supported Annotations.

Patching the Gateway makes cert-manager create a self-signed certificate within a few seconds. Eventually, the Gateway becomes Programmed again:

$ kubectl wait --for=condition=Programmed gateway/eg
gateway.gateway.networking.k8s.io/eg condition met

Testing The Gateway

See Testing in Secure Gateways for the general idea.

Since we have a self-signed certificate, curl will by default reject it, requiring the -k flag:

$ curl -kv -HHost:www.example.com https://127.0.0.1/get
...
* Server certificate:
*  subject: CN=Hello World!
...
< HTTP/2 200
...

How cert-manager and Envoy Gateway Interact

This explains cert-manager Concepts in an Envoy Gateway context.

In the interaction between the two, cert-manager does all the heavy lifting. It subscribes to changes to Gateway resources (using the gateway-shim component). For any Gateway it finds, it looks for any TLS listeners, and the associated tls.certificateRefs. Note that while Gateway API supports multiple refs here, Envoy Gateway only uses one. cert-manager also looks at the hostname of the listener to figure out which hosts the certificate is expected to cover. More than one listener can use the same certificate Secret, which means cert-manager needs to find all listeners using the same Secret before deciding what to do. If the certificateRef points to a valid certificate, given the hostnames found in listeners, cert-manager has nothing to do.

If there is no valid certificate, or it is about to expire, cert-manager’s gateway-shim creates a Certificate resource, or updates the existing one. cert-manager then follows the Certificate Lifecycle. To know how to issue the certificate, a ClusterIssuer is configured, and referenced through annotations on the Gateway resource, which you did above. Once a matching ClusterIssuer is found, that plugin does what needs to be done to acquire a signed certificate.

In the case of the ACME protocol (used by Let’s Encrypt), cert-manager can also use an HTTP Gateway to solve the HTTP-01 challenge type. This is the other side of cert-manager’s Gateway API support: the ACME issuer creates a temporary HTTPRoute, lets the ACME server(s) query it, and deletes it again.

cert-manager then updates the Secret that the Gateway’s listener points to in tls.certificateRefs. Envoy Gateway picks up that the Secret has changed, and reloads the corresponding Envoy Proxy Deployments with the new private key and certificate.

As you can imagine, cert-manager requires quite broad permissions to update Secrets in any namespace, so the security-minded reader may want to look at the RBAC resources the Helm chart creates.

Using the ACME Issuer With Let’s Encrypt and HTTP-01

We will start using the Let’s Encrypt staging environment, to spare their production environment. Our Gateway already contains an HTTP listener, so we will use that for the HTTP-01 challenges.

$ CERT_MANAGER_CONTACT_EMAIL=$(git config user.email)  # Or whatever...
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: "$CERT_MANAGER_CONTACT_EMAIL"
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
    - http01:
        gatewayHTTPRoute:
          parentRefs:
          - kind: Gateway
            name: eg
            namespace: default
EOF

The important parts are

  • using spec.acme with a server URI and contact email address, and
  • referencing our plain HTTP gateway so the challenge HTTPRoute is attached to the right place.

Check the account registration process using the Ready condition:

$ kubectl wait --for=condition=Ready clusterissuer/letsencrypt-staging
$ kubectl describe clusterissuer/letsencrypt-staging
...
Status:
  Acme:
    Uri:                   https://acme-staging-v02.api.letsencrypt.org/acme/acct/123456789
  Conditions:
    Message:               The ACME account was registered with the ACME server
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready
...

Now we’re ready to update the Gateway annotation to use this issuer instead:

$ kubectl annotate --overwrite gateway/eg cert-manager.io/cluster-issuer=letsencrypt-staging

The Gateway should be picked up by cert-manager, which will create a new certificate for you, and replace the Secret.

You should see a new CertificateRequest to track:

$ kubectl get certificaterequest
NAME             APPROVED   DENIED   READY   ISSUER                REQUESTOR                                         AGE
eg-https-xxxxx   True                True    selfsigned            system:serviceaccount:cert-manager:cert-manager   42m
eg-https-xxxxx   True                True    letsencrypt-staging   system:serviceaccount:cert-manager:cert-manager   42m

Testing The Gateway

We still require the -k flag, since the Let’s Encrypt staging environment CA is not widely trusted.

$ curl -kv -HHost:www.example.com https://127.0.0.1/get
...
* Server certificate:
*  subject: CN=Hello World!
*  issuer: C=US; O=(STAGING) Let's Encrypt; CN=(STAGING) Ersatz Edamame E1
...
< HTTP/2 200
...

Using The Let’s Encrypt Production Environment

Changing to the production environment is just a matter of replacing the server URI:

$ CERT_MANAGER_CONTACT_EMAIL=$(git config user.email)  # Or whatever...
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory  # Removed "-staging".
    email: "$CERT_MANAGER_CONTACT_EMAIL"
    privateKeySecretRef:
      name: letsencrypt-account-key                         # Removed "-staging".
    solvers:
    - http01:
        gatewayHTTPRoute:
          parentRefs:
          - kind: Gateway
            name: eg
            namespace: default
EOF

And now you can update the Gateway listener to point to letsencrypt instead:

$ kubectl annotate --overwrite gateway/eg cert-manager.io/cluster-issuer=letsencrypt

As before, track it by looking at CertificateRequests.

Testing The Gateway

Once the certificate has been replaced, we should finally be able to get rid of the -k flag:

$ curl -v -HHost:www.example.com --resolve www.example.com:443:127.0.0.1 https://www.example.com/get
...
* Server certificate:
*  subject: CN=Hello World!
*  issuer: C=US; O=Let's Encrypt; CN=R3
...
< HTTP/2 200
...

Collecting Garbage

You probably want to set the cert-manager.io/revision-history-limit annotation on your Gateway to make cert-manager prune the CertificateRequest history.
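For example, to keep only a single revision of history for the Gateway from this task:

$ kubectl annotate --overwrite gateway/eg cert-manager.io/revision-history-limit=1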

cert-manager deletes unused Certificate resources and updates them in place when possible, so there should be no need to clean up Certificate resources manually. The deletion is based on whether a Gateway still holds a tls.certificateRefs entry that requires the Certificate.

If you remove a TLS listener from a Gateway, you may still have a Secret lingering. cert-manager can clean it up using a flag.

Issuer Namespaces

We have used ClusterIssuer resources in this tutorial. They are not bound to any namespace, and will read annotations from Gateways in any namespace. You could also use Issuer, which is bound to a namespace. This is useful e.g. if you want to use different ACME accounts for different namespaces.

If you change the issuer kind, you also need to change the annotation key from cert-manager.io/clusterissuer to cert-manager.io/issuer.
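For example, a namespaced self-signing Issuer looks just like the earlier ClusterIssuer, plus a namespace; note the different annotation key when referencing it (you would also remove the old cert-manager.io/cluster-issuer annotation to avoid two issuers competing):

$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-ns
  namespace: default
spec:
  selfSigned: {}
EOF

$ kubectl annotate --overwrite gateway/eg cert-manager.io/issuer=selfsigned-ns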

Using ExternalDNS

The ExternalDNS controller maintains DNS records based on Kubernetes resources. Together with cert-manager, this can be used to fully automate hostname management. It can use various source resources, among them Gateway Routes. Just specify a Gateway Route resource, let ExternalDNS create the domain records, and then let cert-manager create the TLS certificate.

The tutorial on Gateway API uses kubectl. They also have a Helm chart, which is easier to customize. The only thing relevant to Envoy Gateway is to set the sources:

# values.yaml
sources:
- gateway-httproute
- gateway-grpcroute
- gateway-tcproute
- gateway-tlsroute
- gateway-udproute

Monitoring Progress / Troubleshooting

You can monitor progress in several ways:

The Issuer has a Ready condition (though this is rather boring for the selfSigned type):

$ kubectl get issuer --all-namespaces
NAMESPACE   NAME         READY   AGE
default     selfsigned   True    42m

The Gateway will say when it has an invalid certificate:

$ kubectl describe gateway/eg
...
    Conditions:
      Message:               Secret default/eg-https does not exist.
      Reason:                InvalidCertificateRef
      Status:                False
      Type:                  ResolvedRefs
...
      Message:               Listener is invalid, see other Conditions for details.
      Reason:                Invalid
      Status:                False
      Type:                  Programmed
...
Events:
  Type     Reason     Age    From                       Message
  ----     ------     ----   ----                       -------
  Warning  BadConfig  42m    cert-manager-gateway-shim  Skipped a listener block: spec.listeners[1].hostname: Required value: the hostname cannot be empty

The main question is whether cert-manager has picked up on the Gateway, i.e., whether it has created a Certificate for it. The above describe contains an event from cert-manager-gateway-shim telling you of one such issue. Be aware that if you have a non-TLS listener in the Gateway, like we did, there will be events saying it is not eligible, which is of course expected.

Another option is looking at Deployment logs. The cert-manager logs are not very verbose by default, but setting the Helm value global.logLevel to 6 will enable all debug logs (the default is 2). This will make them verbose enough to say why a Gateway wasn’t considered (e.g. missing hostname, or tls.mode is not Terminate).

$ kubectl logs -n cert-manager deployment/cert-manager
...

Simply listing Certificate resources may be useful, even if it just gives a yes/no answer:

$ kubectl get certificate --all-namespaces
NAMESPACE   NAME       READY   SECRET     AGE
default     eg-https   True    eg-https   42m

If there is a Certificate, then the gateway-shim has recognized the Gateway. But is there a CertificateRequest for it? (BTW, don’t confuse this with a CertificateSigningRequest, which is a Kubernetes core resource type representing the same thing.)

$ kubectl get certificaterequest --all-namespaces
NAMESPACE   NAME             APPROVED   DENIED   READY   ISSUER       REQUESTOR                                         AGE
default     eg-https-xxxxx   True                True    selfsigned   system:serviceaccount:cert-manager:cert-manager   42m

The ACME issuer also has Order and Challenge resources to watch:

$ kubectl get order --all-namespaces -o wide
NAME                                                     STATE     ISSUER                REASON   AGE
order.acme.cert-manager.io/envoy-https-xxxxx-123456789   pending   letsencrypt-staging            42m

$ kubectl get challenge --all-namespaces
NAME                                                                    STATE     DOMAIN            AGE
challenge.acme.cert-manager.io/envoy-https-xxxxx-123456789-1234567890   pending   www.example.com   42m

Using kubectl get -o wide or kubectl describe on the Challenge will explain its state further.

$ kubectl get order --all-namespaces -o wide
NAME                                                     STATE   ISSUER                REASON   AGE
order.acme.cert-manager.io/envoy-https-xxxxx-123456789   valid   letsencrypt-staging            42m

Finally, since cert-manager creates the Secret referenced by the Gateway listener as its last step, we can also look for that:

$ kubectl get secret/eg-https
NAME       TYPE                DATA   AGE
eg-https   kubernetes.io/tls   3      42m

Clean Up

  • Uninstall cert-manager: helm uninstall --namespace cert-manager cert-manager
  • Delete the cert-manager namespace: kubectl delete namespace/cert-manager
  • Delete the https listener from gateway/eg.
  • Delete secret/eg-https.

See Also

4 - Extensibility

This section includes Extensibility tasks.

4.1 - Build a Wasm image

Envoy Gateway supports two types of Wasm extensions within the EnvoyExtensionPolicy API: HTTP Wasm Extensions and Image Wasm Extensions. Packaging a Wasm extension as an OCI image is beneficial because it simplifies versioning and distribution for users. Additionally, users can leverage existing image toolchain to build and manage Wasm images.

This document describes how to build OCI images which are consumable by Envoy Gateway.

Wasm Image Formats

There are two types of images that are supported by Envoy Gateway. One is in the Docker format, and the other is the standard OCI specification compliant format. Please note that both of them are supported by any OCI registry. You can choose either format depending on your preference, and both types of images are consumable by the Envoy Gateway EnvoyExtensionPolicy API, as sketched below.
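For reference, an image pushed with either toolchain can then be referenced from an EnvoyExtensionPolicy. This is only a sketch: the field names (for example targetRef vs. targetRefs) and the image URL scheme may differ between Envoy Gateway versions, so consult the API reference for the version you run:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyExtensionPolicy
metadata:
  name: wasm-example
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  wasm:
  - name: my-wasm-filter
    code:
      type: Image
      image:
        url: my-registry/mywasm:0.1.0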

Build Wasm Docker image

We assume that you have a valid Wasm binary named plugin.wasm. Then you can build a Wasm Docker image with the Docker CLI.

  1. First, we prepare the following Dockerfile:
$ cat Dockerfile
FROM scratch

COPY plugin.wasm ./

Note: you must have exactly one COPY instruction in the Dockerfile in order to end up having only one layer in produced images.

  2. Then, build your image via the docker build command
$ docker build . -t my-registry/mywasm:0.1.0
  3. Finally, push the image to your registry via the docker push command
$ docker push my-registry/mywasm:0.1.0

Build Wasm OCI image

We assume that you have a valid Wasm binary named plugin.wasm, and you have buildah installed on your machine. Then you can build a Wasm OCI image with the buildah CLI.

  1. First, we create a working container from the scratch base image with the buildah from command.
$ buildah --name mywasm from scratch
mywasm
  2. Then, copy the Wasm binary into that base image with the buildah copy command to create the layer.
$ buildah copy mywasm plugin.wasm ./
af82a227630327c24026d7c6d3057c3d5478b14426b74c547df011ca5f23d271

Note: you must execute buildah copy exactly once in order to end up having only one layer in the produced image.

  3. Now, you can build an OCI image and push it to your registry via the buildah commit command
$ buildah commit mywasm docker://my-remote-registry/mywasm:0.1.0

4.2 - Envoy Patch Policy

This task explains the usage of the EnvoyPatchPolicy API. Note: This API is meant for users extremely familiar with Envoy xDS semantics. Also, before considering this API for production use cases, please be aware that this API is unstable and the outcome may change across versions. Use at your own risk.

Introduction

The EnvoyPatchPolicy API allows users to modify the output xDS configuration generated by Envoy Gateway intended for EnvoyProxy, using JSON Patch semantics.

Motivation

This API was introduced to allow advanced users to be able to leverage Envoy Proxy functionality not exposed by Envoy Gateway APIs today.

Quickstart

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Enable EnvoyPatchPolicy

  • By default, EnvoyPatchPolicy is disabled. Let's enable it in the EnvoyGateway startup configuration.

  • The default installation of Envoy Gateway installs a default EnvoyGateway configuration and attaches it using a ConfigMap. In the next step, we will update this resource to enable EnvoyPatchPolicy.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    extensionApis:
      enableEnvoyPatchPolicy: true
EOF

After updating the ConfigMap, you will need to wait for the configuration to take effect.
You can force the configuration to be reloaded by restarting the envoy-gateway deployment.

kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system

Testing

Customize Response

  • Use EnvoyProxy’s Local Reply Modification feature to return a custom response back to the client when the status code is 404

  • Apply the configuration

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
  name: custom-response-patch-policy
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  type: JSONPatch
  jsonPatches:
    - type: "type.googleapis.com/envoy.config.listener.v3.Listener"
      # The listener name is of the form <GatewayNamespace>/<GatewayName>/<GatewayListenerName>
      name: default/eg/http
      operation:
        op: add
        path: "/default_filter_chain/filters/0/typed_config/local_reply_config"
        value:
          mappers:
          - filter:
              status_code_filter:
                comparison:
                  op: EQ
                  value:
                    default_value: 404
                    runtime_key: key_b
            status_code: 406
            body:
              inline_string: "could not find what you are looking for"
EOF

When mergeGateways is enabled, there is one Envoy deployment for all Gateways in the cluster. In that case, the EnvoyPatchPolicy should target a specific GatewayClass.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
  name: custom-response-patch-policy
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: GatewayClass
    name: eg
  type: JSONPatch
  jsonPatches:
    - type: "type.googleapis.com/envoy.config.listener.v3.Listener"
      # The listener name is of the form <GatewayNamespace>/<GatewayName>/<GatewayListenerName>
      name: default/eg/http
      operation:
        op: add
        path: "/default_filter_chain/filters/0/typed_config/local_reply_config"
        value:
          mappers:
          - filter:
              status_code_filter:
                comparison:
                  op: EQ
                  value:
                    default_value: 404
                    runtime_key: key_b
            status_code: 406
            body:
              inline_string: "could not find what you are looking for"
EOF

  • Edit the HTTPRoute resource from the Quickstart to only match on paths with value /get
kubectl patch httproute backend --type=json --patch '
  - op: add
    path: /spec/rules/0/matches/0/path/value
    value: /get
  '
  • Test it out by specifying a path apart from /get
$ curl --header "Host: www.example.com" http://$GATEWAY_HOST/find
Handling connection for 8888
could not find what you are looking for

Customize VirtualHost by name

  • Use EnvoyProxy’s include_attempt_count_in_response feature to include the attempt count as header in the downstream response.
  • Apply the configuration
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
  name: include-attempts
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  type: JSONPatch
  jsonPatches:
    - type: "type.googleapis.com/envoy.config.route.v3.RouteConfiguration"
      # The RouteConfiguration name is of the form <GatewayNamespace>/<GatewayName>/<GatewayListenerName>
      name: default/eg/http
      operation:
        op: add
        # Every virtual_host that ends with 'www_example_com' (using RegEx Filter)
        jsonPath: "..virtual_hosts[?match(@.name, '.*www_example_com')]"
        # If the property does not exist, it cannot be selected with jsonPath
        # Therefore the new property must be set in path
        path: "include_attempt_count_in_response"
        value: true
EOF

  • Test it out by looking at the response headers
$ curl -v --header "Host: www.example.com" http://localhost:8888/
...
< x-envoy-attempt-count: 1
...

Debugging

Runtime

  • The Status subresource should have information about the status of the resource. Make sure Accepted=True and Programmed=True conditions are set to ensure that the policy has been applied to Envoy Proxy.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"gateway.envoyproxy.io/v1alpha1","kind":"EnvoyPatchPolicy","metadata":{"annotations":{},"name":"custom-response-patch-policy","namespace":"default"},"spec":{"jsonPatches":[{"name":"default/eg/http","operation":{"op":"add","path":"/default_filter_chain/filters/0/typed_config/local_reply_config","value":{"mappers":[{"body":{"inline_string":"could not find what you are looking for"},"filter":{"status_code_filter":{"comparison":{"op":"EQ","value":{"default_value":404}}}}}]}},"type":"type.googleapis.com/envoy.config.listener.v3.Listener"}],"priority":0,"targetRef":{"group":"gateway.networking.k8s.io","kind":"Gateway","name":"eg","namespace":"default"},"type":"JSONPatch"}}      
  creationTimestamp: "2023-07-31T21:47:53Z"
  generation: 1
  name: custom-response-patch-policy
  namespace: default
  resourceVersion: "10265"
  uid: a35bda6e-a0cc-46d7-a63a-cee765174bc3
spec:
  jsonPatches:
  - name: default/eg/http
    operation:
      op: add
      path: /default_filter_chain/filters/0/typed_config/local_reply_config
      value:
        mappers:
        - body:
            inline_string: could not find what you are looking for
          filter:
            status_code_filter:
              comparison:
                op: EQ
                value:
                  default_value: 404
    type: type.googleapis.com/envoy.config.listener.v3.Listener
  priority: 0
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  type: JSONPatch
status:
  conditions:
  - lastTransitionTime: "2023-07-31T21:48:19Z"
    message: EnvoyPatchPolicy has been accepted.
    observedGeneration: 1
    reason: Accepted
    status: "True"
    type: Accepted
  - lastTransitionTime: "2023-07-31T21:48:19Z"
    message: successfully applied patches.
    reason: Programmed
    status: "True"
    type: Programmed

Offline

Caveats

This API will always be an unstable API, and the same outcome cannot be guaranteed across versions, for these reasons:

  • The Envoy Proxy API might deprecate and remove API fields
  • Envoy Gateway might alter the xDS translation creating a different xDS output such as changing the name field of resources.

4.3 - Envoy Gateway Extension Server

This task explains how to extend Envoy Gateway using an Extension Server. Envoy Gateway can be configured to call an external server over gRPC with the xDS configuration before it is sent to Envoy Proxy. The external server can modify the provided configuration programmatically using any semantics supported by the xDS API.

Using an extension server allows vendors to add xDS configuration that Envoy Gateway itself doesn’t support with a very high level of control over the generated xDS configuration.

Note: Modifying the xDS configuration generated by Envoy Gateway may break functionality configured by native Envoy Gateway means. Like other cases where the xDS configuration is modified outside of Envoy Gateway’s control, this is risky and should be tested thoroughly, especially when using the same extension server across different Envoy Gateway versions.

Introduction

One of the Envoy Gateway project goals is to “provide a common foundation for vendors to build value-added products without having to re-engineer fundamental interactions”. The Envoy Gateway Extension Server provides a mechanism where Envoy Gateway tracks all provider resources and then calls a set of hooks that allow the generated xDS configuration to be modified before it is sent to Envoy Proxy. See the design documentation for full details.

This task sets up an example extension server that adds the Envoy Proxy Basic Authentication HTTP filter to all the listeners generated by Envoy Gateway. The example extension server includes its own CRD which allows defining username/password pairs that will be accepted by the Envoy Proxy.

Note: Envoy Gateway supports adding Basic Authentication to routes using a SecurityPolicy. See this task for the preferred way to configure Basic Authentication.

Quickstart

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Build and run the example Extension Server

Build and deploy the example extension server in the examples/extension-server folder into the cluster running Envoy Gateway.

  • Build the extension server image

    Note: The provided Makefile builds an image with the name extension-server:latest. You may need to create a different tag for it in order to allow Kubernetes to pull it correctly.

    make image
    
  • Publish the extension server image in your docker repository

    kind load docker-image --name envoy-gateway extension-server:latest
    
    docker tag extension-server:latest $YOUR_DOCKER_REPO
    docker push $YOUR_DOCKER_REPO
    
  • Deploy the extension server in your cluster

    If you are using your own docker image repository, make sure to update the values.yaml with the correct image name and tag.
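
    For example, the override might look like the following sketch. The image.repository and image.tag keys are assumptions, not confirmed chart values; check the chart's values.yaml for the actual key names.

    # hypothetical values.yaml override; verify the keys against the chart
    image:
      repository: your-docker-repo/extension-server
      tag: latest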

    helm install -n envoy-gateway-system extension-server ./examples/extension-server/charts/extension-server
    

Configure Envoy Gateway

  • Grant Envoy Gateway’s ServiceAccount permission to access the extension server’s CRD

    kubectl create clusterrole listener-context-example-status-update \
               --verb=update  \
               --resource=ListenerContextExample/status
    
    kubectl create clusterrole listener-context-example-viewer \
               --verb=get,list,watch  \
               --resource=ListenerContextExample
    
    kubectl create clusterrolebinding envoy-gateway-listener-context \
               --clusterrole=listener-context-example-viewer \
               --serviceaccount=envoy-gateway-system:envoy-gateway
    
    kubectl create clusterrolebinding envoy-gateway-listener-context-status \
               --clusterrole=listener-context-example-status-update \
               --serviceaccount=envoy-gateway-system:envoy-gateway
    
  • Configure Envoy Gateway to use the Extension Server

    Add the following fragment to Envoy Gateway’s configuration file:

    extensionManager:
      # Envoy Gateway will watch these resource kinds and use them as extension policies
      # which can be attached to Gateway resources.
      policyResources:
      - group: example.extensions.io
        version: v1alpha1
        kind: ListenerContextExample
      hooks:
        # The type of hooks that should be invoked
        xdsTranslator:
          post:
          - HTTPListener
      service:
        # The service that is hosting the extension server
        fqdn: 
          hostname: extension-server.envoy-gateway-system.svc.cluster.local
          port: 5005
    

    After updating Envoy Gateway’s configuration file, restart Envoy Gateway.
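
    For example, if Envoy Gateway runs in-cluster with the default Deployment (as installed in the Quickstart), you can trigger a restart with:

    kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system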

Testing

Get the Gateway’s address:

export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

The extension server adds the Basic Authentication HTTP filter to all listeners configured by Envoy Gateway. Initially there are no valid user/password combinations available. Accessing the example backend should fail with a 401 status:

$ curl -v --header "Host: www.example.com" "http://${GATEWAY_HOST}/example"
...
> GET /example HTTP/1.1
> Host: www.example.com
> User-Agent: curl/7.81.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 401 Unauthorized
< www-authenticate: Basic realm="http://www.example.com/example"
< content-length: 58
< content-type: text/plain
< date: Mon, 08 Jul 2024 10:53:11 GMT
< 
...
User authentication failed. Missing username and password.
...

Add a new Username/Password combination using the example extension server’s CRD:

kubectl apply -f - << EOF 
apiVersion: example.extensions.io/v1alpha1
kind: ListenerContextExample
metadata:
  name: listeneruser
spec:
  targetRefs:
  - kind: Gateway
    name: eg
    group: gateway.networking.k8s.io
  username: user
  password: p@ssw0rd
EOF

Authenticating with this user/password combination will now work.

$ curl -v http://${GATEWAY_HOST}/example  -H "Host: www.example.com"   --user 'user:p@ssw0rd'
...
> GET /example HTTP/1.1
> Host: www.example.com
> Authorization: Basic dXNlcjpwQHNzdzByZA==
> User-Agent: curl/7.81.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Mon, 08 Jul 2024 10:56:17 GMT
< content-length: 559
< 
...
 "headers": {
  "Authorization": [
   "Basic dXNlcm5hbWU6cEBzc3cwcmQ="
  ],
  "X-Example-Ext": [
   "user"
  ],
...

4.4 - External Processing

This task provides instructions for configuring external processing.

External processing calls an external gRPC service to process HTTP requests and responses. The external processing service can inspect and mutate requests and responses.

Envoy Gateway introduces a new CRD called EnvoyExtensionPolicy that allows the user to configure external processing. This resource can be linked to a Gateway or an HTTPRoute resource.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

gRPC External Processing Service

Installation

Install a demo gRPC service that will be used as the external processing service:

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/ext-proc-grpc-service.yaml

Create a new HTTPRoute resource to route traffic on the path /myapp to the backend service.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /myapp
    backendRefs:
    - name: backend
      port: 3000   
EOF

Verify the HTTPRoute status:

kubectl get httproute/myapp -o yaml

Configuration

Create a new EnvoyExtensionPolicy resource to configure the external processing service. This EnvoyExtensionPolicy targets the HTTPRoute “myapp” created in the previous step. It calls the gRPC external processing service “grpc-ext-proc” on port 9002 for processing.

By default, requests and responses are not sent to the external processor. The processingMode struct defines what is sent to the external processor. In this example, we configure the following processing modes:

  • The empty request field configures Envoy to send request headers to the external processor.
  • The response field includes configuration for body processing. As a result, response headers are sent to the external processor, and the response body is streamed to it.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyExtensionPolicy
metadata:
  name: ext-proc-example
spec:
  targetRefs:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      name: myapp
  extProc:
  - backendRefs:
    - name: grpc-ext-proc
      port: 9002
    processingMode:
      request: {}
      response: 
        body: Streamed 
EOF

Verify the Envoy Extension Policy configuration:

kubectl get envoyextensionpolicy/ext-proc-example -o yaml

Because the gRPC external processing service is enabled with TLS, a BackendTLSPolicy needs to be created to configure the communication between the Envoy proxy and the gRPC external processing service.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
  name: grpc-ext-proc-btls
spec:
  targetRefs:
  - group: ''
    kind: Service
    name: grpc-ext-proc
    sectionName: "9002"
  validation:
    caCertificateRefs:
    - name: grpc-ext-proc-ca
      group: ''
      kind: ConfigMap
    hostname: grpc-ext-proc.envoygateway
EOF

Verify the BackendTLSPolicy configuration:

kubectl get backendtlspolicy/grpc-ext-proc-btls -o yaml

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

Send a request to the backend service:

curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/myapp"

You should see that the external processor added headers:

  • x-request-ext-processed - this header was added before the request was forwarded to the backend
  • x-response-ext-processed - this header was added before the response was returned to the client

curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/myapp"
[...]
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Fri, 14 Jun 2024 19:30:40 GMT
< content-length: 502
< x-response-ext-processed: true
<
{
 "path": "/myapp",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
[...] 
  "X-Request-Ext-Processed": [
   "true"
  ],
[...]
 }

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the demo external processing service, HTTPRoute, EnvoyExtensionPolicy and BackendTLSPolicy:

kubectl delete -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/ext-proc-grpc-service.yaml
kubectl delete httproute/myapp
kubectl delete envoyextensionpolicy/ext-proc-example
kubectl delete backendtlspolicy/grpc-ext-proc-btls

Next Steps

Check out the Developer Guide to get involved in the project.

4.5 - Wasm Extensions

This task provides instructions for extending Envoy Gateway with WebAssembly (Wasm) extensions.

Wasm extensions allow you to extend the functionality of Envoy Gateway by running custom code against HTTP requests and responses, without modifying the Envoy Gateway binary. These extensions can be written in any language that compiles to Wasm, such as C++, Rust, AssemblyScript, or TinyGo.

Envoy Gateway introduces a new CRD called EnvoyExtensionPolicy that allows the user to configure Wasm extensions. This resource can be linked to a Gateway or an HTTPRoute resource.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Configuration

Envoy Gateway supports two types of Wasm extensions:

  • HTTP Wasm Extension: The Wasm extension is fetched from a remote URL.
  • Image Wasm Extension: The Wasm extension is packaged as an OCI image and fetched from an image registry.

The following example demonstrates how to configure an EnvoyExtensionPolicy that attaches a Wasm extension to an HTTPRoute. This Wasm extension adds a custom header x-wasm-custom: FOO to the response.

HTTP Wasm Extension

This EnvoyExtensionPolicy configuration fetches the Wasm extension from an HTTP URL.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyExtensionPolicy
metadata:
  name: wasm-test
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  wasm:
  - name: wasm-filter
    rootID: my_root_id
    code:
      type: HTTP
      http:
        url: https://raw.githubusercontent.com/envoyproxy/examples/main/wasm-cc/lib/envoy_filter_http_wasm_example.wasm
        sha256: 79c9f85128bb0177b6511afa85d587224efded376ac0ef76df56595f1e6315c0
EOF
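
The sha256 field is the checksum of the .wasm binary; you can verify it locally before applying (assuming sha256sum is available on your machine):

curl -sL https://raw.githubusercontent.com/envoyproxy/examples/main/wasm-cc/lib/envoy_filter_http_wasm_example.wasm | sha256sum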

Verify the EnvoyExtensionPolicy status:

kubectl get envoyextensionpolicy/wasm-test -o yaml

Image Wasm Extension

This EnvoyExtensionPolicy configuration fetches the Wasm extension from an OCI image.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyExtensionPolicy
metadata:
  name: wasm-test
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  wasm:
  - name: wasm-filter
    rootID: my_root_id
    code:
      type: Image
      image:
        url: zhaohuabing/testwasm:v0.0.1
EOF

Verify the EnvoyExtensionPolicy status:

kubectl get envoyextensionpolicy/wasm-test -o yaml

Testing

Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.

echo $GATEWAY_HOST

Send a request to the backend service:

curl -i -H "Host: www.example.com" "http://${GATEWAY_HOST}"

You should see that the wasm extension has added this header to the response:

x-wasm-custom: FOO

Clean-Up

Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.

Delete the EnvoyExtensionPolicy:

kubectl delete envoyextensionpolicy/wasm-test

Next Steps

Check out the Developer Guide to get involved in the project.

5 - Observability

This section includes Observability tasks.

5.1 - Gateway API Metrics

Resource metrics for Gateway API objects are available using the Gateway API State Metrics project. The project also provides example dashboards for visualising the metrics using Grafana, and example alerts using Prometheus & Alertmanager.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Run the following commands to install the metrics stack, with the Gateway API State Metrics configuration, on your Kubernetes cluster:

kubectl apply --server-side -f https://raw.githubusercontent.com/Kuadrant/gateway-api-state-metrics/main/config/examples/kube-prometheus/bundle_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/gateway-api-state-metrics/main/config/examples/kube-prometheus/bundle.yaml

Metrics and Alerts

To access the Prometheus UI, wait for the statefulset to be ready, then use the port-forward command:

# This first command may fail if the statefulset has not been created yet.
# In that case, try again until you get a message like 'Waiting for 2 pods to be ready...'
# or 'statefulset rolling update complete 2 pods...'
kubectl -n monitoring rollout status --watch --timeout=5m statefulset/prometheus-k8s
kubectl -n monitoring port-forward service/prometheus-k8s 9090:9090 > /dev/null &

Navigate to http://localhost:9090. Metrics can be queried from the ‘Graph’ tab, e.g. gatewayapi_gateway_created. See the Gateway API State Metrics README for the full list of Gateway API metrics available.
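
You can also query a metric through the Prometheus HTTP API, using the same port-forward:

curl -s 'http://localhost:9090/api/v1/query?query=gatewayapi_gateway_created' | jq '.data.result'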

Alerts can be seen in the ‘Alerts’ tab. Gateway API specific alerts will be grouped under the ‘gateway-api.rules’ heading.

Note: Alerts are defined in a PrometheusRules custom resource in the ‘monitoring’ namespace. You can modify the alert rules by updating this resource.

Dashboards

To view the dashboards in Grafana, wait for the deployment to be ready, then use the port-forward command:

kubectl -n monitoring wait --timeout=5m deployment/grafana --for=condition=Available
kubectl -n monitoring port-forward service/grafana 3000:3000 > /dev/null &

Navigate to http://localhost:3000 and sign in with admin/admin. The Gateway API State dashboards will be available in the ‘Default’ folder and tagged with ‘gateway-api’. See the Gateway API State Metrics README for further information on available dashboards.

Note: Dashboards are loaded from configmaps. You can modify the dashboards in the Grafana UI, however you will need to export them from the UI and update the json in the configmaps to persist changes.

5.2 - Gateway Exported Metrics

Envoy Gateway provides a collection of self-monitoring metrics in Prometheus format.

These metrics allow monitoring of the behavior of Envoy Gateway itself (as distinct from that of the EnvoyProxy instances it manages).

Watching Components

The Resource Provider, the xDS Translator, the Infra Manager, and other key components that make up Envoy Gateway all follow the Watching Components design.

Envoy Gateway collects the following metrics in Watching Components:

Name | Description
watchable_depth | Current depth of the watchable map.
watchable_subscribe_duration_seconds | How long in seconds a subscribed watchable queue is handled.
watchable_subscribe_total | Total number of subscribed watchable queues.

Each metric includes the runner label to identify the corresponding component; the relationship between label values and components is as follows:

Value | Component
gateway-api | Gateway API Translator
infrastructure | Infrastructure Manager
xds-server | xDS Server
xds-translator | xDS Translator
global-ratelimit | Global RateLimit xDS Translator

Metrics may include one or more additional labels, such as message, status, and reason.
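
As a quick check, you can port-forward the Envoy Gateway metrics port (as in the Gateway Observability task) and filter a metric by its runner label; the pod selector below matches the Quickstart installation:

export ENVOY_GATEWAY_POD=$(kubectl get pod -n envoy-gateway-system --selector=control-plane=envoy-gateway,app.kubernetes.io/instance=eg -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward pod/$ENVOY_GATEWAY_POD -n envoy-gateway-system 19001:19001 &

# show the watchable map depth for the xDS Translator
curl -s localhost:19001/metrics | grep 'watchable_depth' | grep 'runner="xds-translator"'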

Status Updater

Envoy Gateway monitors the status updates of various resources (like GatewayClass, Gateway and HTTPRoute etc.) through Status Updater.

Envoy Gateway collects the following metrics in Status Updater:

Name | Description
status_update_total | Total number of status updates by object kind.
status_update_duration_seconds | How long a status update takes to finish.

Each metric includes the kind label to identify the corresponding resource.

xDS Server

Envoy Gateway monitors the cache and xDS connection status in xDS Server.

Envoy Gateway collects the following metrics in xDS Server:

Name | Description
xds_snapshot_create_total | Total number of xDS snapshot cache creates.
xds_snapshot_update_total | Total number of xDS snapshot cache updates by node id.
xds_stream_duration_seconds | How long an xDS stream takes to finish.

  • For xDS snapshot cache updates and xDS stream connection status, each metric includes the nodeID label to identify the connection peer.
  • For xDS stream connection status, each metric also includes the streamID label to identify the connection stream, and the isDeltaStream label to identify the delta connection stream.

Infrastructure Manager

Envoy Gateway monitors the apply (create or update) and delete operations in Infrastructure Manager.

Envoy Gateway collects the following metrics in Infrastructure Manager:

Name | Description
resource_apply_total | Total number of applied resources.
resource_apply_duration_seconds | How long in seconds it takes for a resource to be applied successfully.
resource_delete_total | Total number of deleted resources.
resource_delete_duration_seconds | How long in seconds it takes for a resource to be deleted successfully.

Each metric includes the kind label to identify the corresponding resource being applied or deleted by the Infrastructure Manager.

Metrics may also include name and namespace labels to identify the name and namespace of the corresponding resource.

Wasm

Envoy Gateway monitors the status of Wasm remote fetch cache.

Name | Description
wasm_cache_entries | Number of Wasm remote fetch cache entries.
wasm_cache_lookup_total | Total number of Wasm remote fetch cache lookups.
wasm_remote_fetch_total | Total number of Wasm remote fetches and results.

For the wasm_cache_lookup_total metric, the hit label (boolean) indicates whether the Wasm cache was hit.

5.3 - Gateway Observability

Envoy Gateway provides observability for the ControlPlane and the underlying EnvoyProxy instances. This task shows you how to configure observability for the gateway control plane, including metrics.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Envoy Gateway provides an add-ons Helm Chart, which includes all the components needed for observability. By default, the OpenTelemetry Collector is disabled.

Install the add-ons Helm Chart:

helm install eg-addons oci://docker.io/envoyproxy/gateway-addons-helm --version v0.0.0-latest --set opentelemetry-collector.enabled=true -n monitoring --create-namespace

Metrics

The default installation of Envoy Gateway installs a default EnvoyGateway configuration and attaches it using a ConfigMap. In this section, we will update this resource to enable various ways to retrieve metrics from Envoy Gateway.

Retrieve Prometheus Metrics from Envoy Gateway

By default, Prometheus metrics are enabled. You can retrieve metrics directly from Envoy Gateway:

export ENVOY_POD_NAME=$(kubectl get pod -n envoy-gateway-system --selector=control-plane=envoy-gateway,app.kubernetes.io/instance=eg -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward pod/$ENVOY_POD_NAME -n envoy-gateway-system 19001:19001

# check metrics 
curl localhost:19001/metrics

The following example disables Prometheus metrics for Envoy Gateway.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    telemetry:
      metrics:
        prometheus:
          disable: true
EOF

Save and apply the following resource to your cluster:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    telemetry:
      metrics:
        prometheus:
          disable: true    

After updating the ConfigMap, you will need to wait for the configuration to take effect.
You can force the configuration to be reloaded by restarting the envoy-gateway deployment.

kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system

Enable OpenTelemetry sink in Envoy Gateway

The following is an example of sending metrics via an OpenTelemetry sink to the OTel gRPC collector.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    telemetry:
      metrics:
        sinks:
          - type: OpenTelemetry
            openTelemetry:
              host: otel-collector.monitoring.svc.cluster.local
              port: 4317
              protocol: grpc
EOF

After updating the ConfigMap, you will need to wait for the configuration to take effect.
You can force the configuration to be reloaded by restarting the envoy-gateway deployment.

kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system

Verify OTel-Collector metrics:

export OTEL_POD_NAME=$(kubectl get pod -n monitoring --selector=app.kubernetes.io/name=opentelemetry-collector -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward pod/$OTEL_POD_NAME -n monitoring 19001:19001

# check metrics 
curl localhost:19001/metrics

5.4 - Proxy Access Logs

Envoy Gateway provides observability for the ControlPlane and the underlying EnvoyProxy instances. This task shows you how to configure proxy access logs.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Envoy Gateway provides an add-ons Helm Chart, which includes all the components needed for observability. By default, the OpenTelemetry Collector is disabled.

Install the add-ons Helm Chart:

helm install eg-addons oci://docker.io/envoyproxy/gateway-addons-helm --version v0.0.0-latest --set opentelemetry-collector.enabled=true -n monitoring --create-namespace

By default, the loki Service type is ClusterIP; you can change it to LoadBalancer for external access:

kubectl patch service loki -n monitoring -p '{"spec": {"type": "LoadBalancer"}}'

Expose endpoints:

LOKI_IP=$(kubectl get svc loki -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Default Access Log

If a custom format string is not specified, Envoy Gateway uses the following default format:

{
  "start_time": "%START_TIME%",
  "method": "%REQ(:METHOD)%",
  "x-envoy-origin-path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%",
  "protocol": "%PROTOCOL%",
  "response_code": "%RESPONSE_CODE%",
  "response_flags": "%RESPONSE_FLAGS%",
  "response_code_details": "%RESPONSE_CODE_DETAILS%",
  "connection_termination_details": "%CONNECTION_TERMINATION_DETAILS%",
  "upstream_transport_failure_reason": "%UPSTREAM_TRANSPORT_FAILURE_REASON%",
  "bytes_received": "%BYTES_RECEIVED%",
  "bytes_sent": "%BYTES_SENT%",
  "duration": "%DURATION%",
  "x-envoy-upstream-service-time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%",
  "x-forwarded-for": "%REQ(X-FORWARDED-FOR)%",
  "user-agent": "%REQ(USER-AGENT)%",
  "x-request-id": "%REQ(X-REQUEST-ID)%",
  ":authority": "%REQ(:AUTHORITY)%",
  "upstream_host": "%UPSTREAM_HOST%",
  "upstream_cluster": "%UPSTREAM_CLUSTER%",
  "upstream_local_address": "%UPSTREAM_LOCAL_ADDRESS%",
  "downstream_local_address": "%DOWNSTREAM_LOCAL_ADDRESS%",
  "downstream_remote_address": "%DOWNSTREAM_REMOTE_ADDRESS%",
  "requested_server_name": "%REQUESTED_SERVER_NAME%",
  "route_name": "%ROUTE_NAME%"
}

Note: Envoy Gateway disables Envoy headers by default; you can enable them by setting EnableEnvoyHeaders to true in the ClientTrafficPolicy CRD.
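
A minimal sketch of such a policy, assuming the headers.enableEnvoyHeaders field in the v1alpha1 API and attachment to the quickstart Gateway (verify the field name against your Envoy Gateway version):

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-envoy-headers
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  headers:
    enableEnvoyHeaders: true  # assumption: field name per the v1alpha1 HeaderSettings API
EOF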

Verify logs from loki:

curl -s "http://$LOKI_IP:3100/loki/api/v1/query_range" --data-urlencode "query={job=\"fluentbit\"}" | jq '.data.result[0].values'

Disable Access Log

If you want to disable access logs, set telemetry.accessLog.disable to true in the EnvoyProxy CRD.

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: disable-accesslog
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: disable-accesslog
  namespace: envoy-gateway-system
spec:
  telemetry:
    accessLog:
      disable: true
EOF

OpenTelemetry Sink

Envoy Gateway can send logs to an OpenTelemetry sink.

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: otel-access-logging
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: otel-access-logging
  namespace: envoy-gateway-system
spec:
  telemetry:
    accessLog:
      settings:
        - format:
            type: Text
            text: |
              [%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%" "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"
          sinks:
            - type: OpenTelemetry
              openTelemetry:
                host: otel-collector.monitoring.svc.cluster.local
                port: 4317
                resources:
                  k8s.cluster.name: "cluster-1"
EOF

Verify logs from loki:

curl -s "http://$LOKI_IP:3100/loki/api/v1/query_range" --data-urlencode "query={exporter=\"OTLP\"}" | jq '.data.result[0].values'

gRPC Access Log Service (ALS) Sink

Envoy Gateway can send logs to a backend that implements the gRPC Access Log Service proto. There’s an example service here, which simply counts the logs and exports the count to a Prometheus endpoint.

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/envoy-als.yaml -n monitoring

The following configuration sends logs to the gRPC access log service:

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: als
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: als
  namespace: envoy-gateway-system
spec:
  telemetry:
    accessLog:
      settings:
        - sinks:
            - type: ALS
              als:
                backendRefs:
                  - name: envoy-als
                    namespace: monitoring
                    port: 8080
                type: HTTP
EOF

Verify logs from envoy-als:

curl -s "http://$(kubectl get svc envoy-als -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):19001/metrics" | grep log_count

CEL Expressions

Envoy Gateway provides CEL expressions to filter access logs.

For example, you can use the expression 'x-envoy-logged' in request.headers to log only requests that contain the x-envoy-logged header.

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: otel-access-logging
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: otel-access-logging
  namespace: envoy-gateway-system
spec:
  telemetry:
    accessLog:
      settings:
        - format:
            type: Text
            text: |
              [%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%" "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"
          matches:
            - "'x-envoy-logged' in request.headers"
          sinks:
            - type: OpenTelemetry
              openTelemetry:
                host: otel-collector.monitoring.svc.cluster.local
                port: 4317
                resources:
                  k8s.cluster.name: "cluster-1"
EOF
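
To generate a log entry that matches this expression, send a request that includes the x-envoy-logged header (any value works), with GATEWAY_HOST set as in the Quickstart:

curl -v -H "Host: www.example.com" -H "x-envoy-logged: true" "http://${GATEWAY_HOST}/"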

Verify logs from loki:

curl -s "http://$LOKI_IP:3100/loki/api/v1/query_range" --data-urlencode "query={exporter=\"OTLP\"}" | jq '.data.result[0].values'

Additional Metadata

Envoy Gateway provides additional metadata about the K8s resources that were translated to certain Envoy resources. For example, details about the HTTPRoute and GRPCRoute (kind, group, name, namespace and annotations) are available to the access log formatter using the METADATA operator. To enrich logs, users can add a log operator such as %METADATA(ROUTE:envoy-gateway:resources)% to their access log format.
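
For example, a Text format that appends this metadata might look like the following fragment of telemetry.accessLog (a sketch; adjust the rest of the format string to your needs):

settings:
  - format:
      type: Text
      text: |
        [%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% route=%METADATA(ROUTE:envoy-gateway:resources)%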

Access Log Types

By default, Access Log settings apply to:

  • All Routes
  • The Listener, which emits the access log instead when traffic is not matched by any Route known to Envoy

Users may wish to customize this behavior, for example to:

  • Emit Access Logs from all Listeners for all traffic with specific settings
  • Not emit Route-oriented access logs when a route is not matched

To achieve this, users can select if Access Log settings follow the default behavior or apply specifically to Routes or Listeners by specifying the setting’s type.

Note: When users define their own Access Log settings (with or without a type), the default Envoy Gateway file access log is no longer configured. It can be re-enabled explicitly by adding empty settings for the desired components.

In the following example:

  • Route Access logs would use the default Envoy Gateway format and sink
  • Listener Access logs are customized to report transport-level failures and connection attributes
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: otel-access-logging
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: otel-access-logging
  namespace: envoy-gateway-system
spec:
  telemetry:
    accessLog:
      settings:
        - type: Route # re-enable default access log for route
        - type: Listener # configure specific access log for listeners
          format:
            type: Text
            text: |
              [%START_TIME%] %DOWNSTREAM_REMOTE_ADDRESS% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DOWNSTREAM_TRANSPORT_FAILURE_REASON%
          sinks:
            - type: OpenTelemetry
              openTelemetry:
                host: otel-collector.monitoring.svc.cluster.local
                port: 4317
                resources:
                  k8s.cluster.name: "cluster-1"
EOF

5.5 - Proxy Metrics

Envoy Gateway provides observability for the ControlPlane and the underlying EnvoyProxy instances. This task shows you how to configure proxy metrics.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Envoy Gateway provides an add-ons Helm Chart, which includes all the components needed for observability. By default, the OpenTelemetry Collector is disabled.

Install the add-ons Helm Chart:

helm install eg-addons oci://docker.io/envoyproxy/gateway-addons-helm --version v0.0.0-latest --set opentelemetry-collector.enabled=true -n monitoring --create-namespace

Metrics

By default, Envoy Gateway exposes metrics on a Prometheus endpoint.

Verify metrics:

export ENVOY_POD_NAME=$(kubectl get pod -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward pod/$ENVOY_POD_NAME -n envoy-gateway-system 19001:19001

# check metrics 
curl localhost:19001/stats/prometheus  | grep "default/backend/rule/0"

You can disable metrics by setting telemetry.metrics.prometheus.disable to true in the EnvoyProxy CRD.

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/metric/disable-prometheus.yaml

Envoy Gateway can also send metrics to an OpenTelemetry sink. Send metrics to the OTel Collector:

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/metric/otel-sink.yaml

Verify OTel-Collector metrics:

export OTEL_POD_NAME=$(kubectl get pod -n monitoring --selector=app.kubernetes.io/name=opentelemetry-collector -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward pod/$OTEL_POD_NAME -n monitoring 19001:19001

# check metrics 
curl localhost:19001/metrics  | grep "default/backend/rule/0"

5.6 - Proxy Tracing

Envoy Gateway provides observability for the ControlPlane and the underlying EnvoyProxy instances. This task shows you how to configure proxy tracing.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Envoy Gateway provides an add-ons Helm Chart, which includes all the components needed for observability. By default, the OpenTelemetry Collector is disabled.

Install the add-ons Helm Chart:

helm install eg-addons oci://docker.io/envoyproxy/gateway-addons-helm --version v0.0.0-latest --set opentelemetry-collector.enabled=true -n monitoring --create-namespace

Expose Tempo endpoints:

TEMPO_IP=$(kubectl get svc tempo -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Traces

By default, Envoy Gateway doesn’t send traces to any sink. You can enable traces by setting telemetry.tracing in the EnvoyProxy CRD. Currently, Envoy Gateway supports the OpenTelemetry, Zipkin and Datadog tracers.

Tracing Provider

The following configurations show how to configure the proxy with different tracing providers:

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: otel
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: otel
  namespace: envoy-gateway-system
spec:
  telemetry:
    tracing:
      # sample 100% of requests
      samplingRate: 100
      provider:
        backendRefs:
        - name: otel-collector
          namespace: monitoring
          port: 4317
        type: OpenTelemetry
      customTags:
        # This is an example of using a literal as a tag value
        provider:
          type: Literal
          literal:
            value: "otel"
        "k8s.pod.name":
          type: Environment
          environment:
            name: ENVOY_POD_NAME
            defaultValue: "-"
        "k8s.namespace.name":
          type: Environment
          environment:
            name: ENVOY_GATEWAY_NAMESPACE
            defaultValue: "envoy-gateway-system"
        # This is an example of using a header value as a tag value
        header1:
          type: RequestHeader
          requestHeader:
            name: X-Header-1
            defaultValue: "-"
EOF

Verify OpenTelemetry traces from tempo:

curl -s "http://$TEMPO_IP:3100/api/search?tags=component%3Dproxy+provider%3Dotel" | jq .traces
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: zipkin
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: zipkin
  namespace: envoy-gateway-system
spec:
  telemetry:
    tracing:
      # sample 100% of requests
      samplingRate: 100
      provider:
        backendRefs:
        - name: otel-collector
          namespace: monitoring
          port: 9411
        type: Zipkin
        zipkin:
          enable128BitTraceId: true
      customTags:
        # This is an example of using a literal as a tag value
        provider:
          type: Literal
          literal:
            value: "zipkin"
        "k8s.pod.name":
          type: Environment
          environment:
            name: ENVOY_POD_NAME
            defaultValue: "-"
        "k8s.namespace.name":
          type: Environment
          environment:
            name: ENVOY_GATEWAY_NAMESPACE
            defaultValue: "envoy-gateway-system"
        # This is an example of using a header value as a tag value
        header1:
          type: RequestHeader
          requestHeader:
            name: X-Header-1
            defaultValue: "-"
EOF

Verify zipkin traces from tempo:

curl -s "http://$TEMPO_IP:3100/api/search?tags=component%3Dproxy+provider%3Dzipkin" | jq .traces
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: datadog
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: datadog
  namespace: envoy-gateway-system
spec:
  telemetry:
    tracing:
      # sample 100% of requests
      samplingRate: 100
      provider:
        backendRefs:
        - name: datadog-agent
          namespace: monitoring
          port: 8126
        type: Datadog
      customTags:
        # This is an example of using a literal as a tag value
        provider:
          type: Literal
          literal:
            value: "datadog"
        "k8s.pod.name":
          type: Environment
          environment:
            name: ENVOY_POD_NAME
            defaultValue: "-"
        "k8s.namespace.name":
          type: Environment
          environment:
            name: ENVOY_GATEWAY_NAMESPACE
            defaultValue: "envoy-gateway-system"
        # This is an example of using a header value as a tag value
        header1:
          type: RequestHeader
          requestHeader:
            name: X-Header-1
            defaultValue: "-"
EOF

Verify Datadog traces in Datadog APM

Query trace by trace id:

curl -s "http://$TEMPO_IP:3100/api/traces/<trace_id>" | jq

Sampling Rate

By default, Envoy Gateway uses a 100% sampling rate, which means all requests are traced. This may cause performance issues when traffic is very high; you can adjust the sampling rate by setting telemetry.tracing.samplingRate in the EnvoyProxy CRD.

The following configuration shows how to configure the proxy with a 1% sampling rate:

kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: otel
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: otel
  namespace: envoy-gateway-system
spec:
  telemetry:
    tracing:
      # sample 1% of requests
      samplingRate: 1
      provider:
        backendRefs:
        - name: otel-collector
          namespace: monitoring
          port: 4317
        type: OpenTelemetry
      customTags:
        # This is an example of using a literal as a tag value
        provider:
          type: Literal
          literal:
            value: "otel"
        "k8s.pod.name":
          type: Environment
          environment:
            name: ENVOY_POD_NAME
            defaultValue: "-"
        "k8s.namespace.name":
          type: Environment
          environment:
            name: ENVOY_GATEWAY_NAMESPACE
            defaultValue: "envoy-gateway-system"
        # This is an example of using a header value as a tag value
        header1:
          type: RequestHeader
          requestHeader:
            name: X-Header-1
            defaultValue: "-"
EOF

5.7 - RateLimit Observability

Envoy Gateway provides observability for the RateLimit instances. This guide shows you how to configure RateLimit observability, including traces.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Envoy Gateway provides an add-ons Helm Chart, which includes all the components needed for observability. By default, the OpenTelemetry Collector is disabled.

Install the add-ons Helm Chart:

helm install eg-addons oci://docker.io/envoyproxy/gateway-addons-helm --version v0.0.0-latest --set opentelemetry-collector.enabled=true -n monitoring --create-namespace

Follow the steps from the Global Rate Limit to install RateLimit.

Traces

By default, Envoy Gateway does not configure RateLimit to send traces to an OpenTelemetry sink. You can configure the collector in the rateLimit.telemetry.tracing section of the EnvoyGateway CRD.

RateLimit uses the OpenTelemetry Exporter to export traces to the collector. You can configure any collector that supports the OTLP protocol, including, but not limited to, the OpenTelemetry Collector, Jaeger, and Zipkin.

Note:

  • By default, Envoy Gateway configures a 100% sampling rate for RateLimit, which may lead to performance issues.

Assuming the OpenTelemetry Collector is running in the observability namespace with a Service named otel-svc, and we only want to sample 50% of the trace data, we would configure it as follows:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    rateLimit:
      backend:
        type: Redis
        redis:
          url: redis-service.default.svc.cluster.local:6379
      telemetry:
        tracing:
          sampleRate: 50
          provider:
            url: otel-svc.observability.svc.cluster.local:4318
EOF

After updating the ConfigMap, you will need to wait for the configuration to take effect.
You can force the configuration to be reloaded by restarting the envoy-gateway deployment.

kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system

5.8 - Visualising metrics using Grafana

Envoy Gateway provides support for exposing Envoy Gateway and Envoy Proxy metrics to a Prometheus instance. This task shows you how to visualise the metrics exposed to Prometheus using Grafana.

Prerequisites

Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Envoy Gateway provides an add-ons Helm Chart, which includes all the components needed for observability. By default, the OpenTelemetry Collector is disabled.

Install the add-ons Helm Chart:

helm install eg-addons oci://docker.io/envoyproxy/gateway-addons-helm --version v0.0.0-latest --set opentelemetry-collector.enabled=true -n monitoring --create-namespace

Follow the steps from the Gateway Observability and Proxy Metrics to enable Prometheus metrics for both Envoy Gateway (Control Plane) and Envoy Proxy (Data Plane).

Expose endpoints:

GRAFANA_IP=$(kubectl get svc grafana -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Connecting Grafana with Prometheus datasource

To visualise metrics from Prometheus, we have to connect Grafana with Prometheus. If you installed Grafana using the command from the prerequisites section, the Prometheus datasource should already be configured.

You can also add the datasource manually by following the instructions from Grafana Docs.

Accessing Grafana

You can access the Grafana instance by visiting http://$GRAFANA_IP, using the address derived in the prerequisites.

To log in to Grafana, use the credentials admin:admin.

Envoy Gateway provides example dashboards to help you get started; you can check them out under Dashboards/envoy-gateway.

If you’d like to import Grafana dashboards on your own, please refer to the Grafana docs for importing dashboards.

Envoy Proxy Global

This dashboard example shows the overall downstream and upstream stats for each Envoy Proxy instance.

Envoy Proxy Global

Envoy Clusters

This dashboard example shows the overall stats for each cluster from Envoy Proxy fleet.

Envoy Clusters

Envoy Gateway Global

This dashboard example shows the overall stats exported by Envoy Gateway fleet.

Envoy Gateway Global: Watching Components

Envoy Gateway Global: Status Updater

Envoy Gateway Global: xDS Server

Envoy Gateway Global: Infrastructure Manager

Resources Monitor

This dashboard example shows the overall resources stats for both Envoy Gateway and Envoy Proxy fleet.

Envoy Gateway Resources

Update Dashboards

All Envoy Gateway dashboards are maintained under charts/gateway-addons-helm/dashboards; contributions are welcome.

Grafonnet

Newer dashboards are generated from Jsonnet using Grafonnet. This is the preferred method for any new dashboards.

You can run make helm-generate.gateway-addons-helm to generate new versions of the dashboards. All generated dashboards have a .gen.json suffix.

Legacy Dashboards

Many of the older dashboards were created manually in the UI, exported as JSON, and checked in.

These example dashboards cannot be updated in-place by default. If you want to make changes to the older dashboards, save them directly as a JSON file and then re-import them.

6 - Operations

This section includes Operations tasks.

6.1 - Customize EnvoyProxy

Envoy Gateway provides an EnvoyProxy CRD that can be linked to the ParametersRef in a Gateway or GatewayClass, allowing cluster admins to customize the managed EnvoyProxy Deployment and Service. To learn more about GatewayClass and ParametersRef, please refer to the Gateway API documentation.

Prerequisites

Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.

Verify the Gateway status:

kubectl get gateway/eg -o yaml
egctl x status gateway -v

Before you start, you need to add infrastructure.parametersRef to the Gateway, referring to the EnvoyProxy config:

Note: MergeGateways cannot be set to true in your EnvoyProxy config if it is attached to the Gateway.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  infrastructure:
    parametersRef:
      group: gateway.envoyproxy.io
      kind: EnvoyProxy
      name: custom-proxy-config
  listeners:
    - name: http
      protocol: HTTP
      port: 80
EOF

You can also attach the EnvoyProxy resource to the GatewayClass using the parametersRef field. This configuration is discouraged if you plan on creating multiple Gateways linking to the same GatewayClass and would like different infrastructure configurations for each of them.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy-config
    namespace: default
EOF

Customize EnvoyProxy Deployment Replicas

You can customize the EnvoyProxy Deployment replicas via the EnvoyProxy config as follows:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        replicas: 2
EOF

After you apply the config, you should see the number of EnvoyProxy replicas change to 2. You can also change the value dynamically.

kubectl get deployment -l gateway.envoyproxy.io/owning-gateway-name=eg -n envoy-gateway-system

Customize EnvoyProxy Image

You can customize the EnvoyProxy image via the EnvoyProxy config as follows:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        container:
          image: envoyproxy/envoy:v1.25-latest
EOF

After applying the config, you can get the deployment and see that its image has changed.
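
For example, the following command prints the image of the first container in the managed deployment (the jsonpath is a sketch; the container layout may vary by version):

kubectl get deployment -l gateway.envoyproxy.io/owning-gateway-name=eg -n envoy-gateway-system -o jsonpath='{.items[0].spec.template.spec.containers[0].image}'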

Customize EnvoyProxy Pod Annotations

You can customize the EnvoyProxy pod annotations via the EnvoyProxy config as follows:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        pod:
          annotations:
            custom1: deploy-annotation1
            custom2: deploy-annotation2
EOF

After applying the config, you can get the EnvoyProxy pods and see that the new annotations have been added.
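
For example, the following command prints the annotations of the first EnvoyProxy pod (same label selector as the replicas check above):

kubectl get pods -l gateway.envoyproxy.io/owning-gateway-name=eg -n envoy-gateway-system -o jsonpath='{.items[0].metadata.annotations}'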

Customize EnvoyProxy Deployment Resources

You can customize the EnvoyProxy Deployment resources via the EnvoyProxy config as follows:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        container:
          resources:
            requests:
              cpu: 150m
              memory: 640Mi
            limits:
              cpu: 500m
              memory: 1Gi
EOF

Customize EnvoyProxy Deployment Env

You can customize the EnvoyProxy Deployment environment variables via the EnvoyProxy config as follows:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        container:
          env:
          - name: env_a
            value: env_a_value
          - name: env_b
            value: env_b_value
EOF

Envoy Gateway provides two initial environment variables, ENVOY_GATEWAY_NAMESPACE and ENVOY_POD_NAME, for the envoyproxy container.

After applying the config, you can get the envoyproxy deployment and see that the environment variables have been changed.
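
For example (assuming the quickstart Gateway eg):

kubectl get deployment -l gateway.envoyproxy.io/owning-gateway-name=eg -n envoy-gateway-system -o jsonpath='{.items[0].spec.template.spec.containers[?(@.name=="envoy")].env}'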

Customize EnvoyProxy Deployment Volumes or VolumeMounts

You can customize the EnvoyProxy deployment volumes or volume mounts via the EnvoyProxy config as follows:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        container:
          volumeMounts:
          - mountPath: /certs
            name: certs
            readOnly: true
        pod:
          volumes:
          - name: certs
            secret:
              secretName: envoy-cert
EOF

After applying the config, you can get the envoyproxy deployment and see that the volumes and volume mounts have been changed.
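
For example, the following lists the pod volumes on the managed deployment (assuming the quickstart Gateway eg):

kubectl get deployment -l gateway.envoyproxy.io/owning-gateway-name=eg -n envoy-gateway-system -o jsonpath='{.items[0].spec.template.spec.volumes[*].name}'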

Customize EnvoyProxy Service Annotations

You can customize the EnvoyProxy service annotations via the EnvoyProxy config as follows:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        annotations:
          custom1: svc-annotation1
          custom2: svc-annotation2
EOF

After applying the config, you can get the envoyproxy service and see that the annotations have been added.
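
For example (assuming the quickstart Gateway eg):

kubectl get svc -n envoy-gateway-system -l gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.annotations}'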

Customize EnvoyProxy Bootstrap Config

You can customize the EnvoyProxy bootstrap config via EnvoyProxy Config. There are three ways to customize it:

  • Replace: the whole bootstrap config will be replaced by the config you provided.
  • Merge: the config you provided will be merged into the default bootstrap config.
  • JSONPatch: the list of JSON Patches you provided will be applied to the bootstrap config. JSON Patch is a standard format specified in RFC 6902.

The following example uses Replace to supply an entire bootstrap config:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default 
spec:
  bootstrap:
    type: Replace
    value: |
      admin:
        access_log:
        - name: envoy.access_loggers.file
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
            path: /dev/null
        address:
          socket_address:
            address: 127.0.0.1
            port_value: 20000
      dynamic_resources:
        ads_config:
          api_type: DELTA_GRPC
          transport_api_version: V3
          grpc_services:
          - envoy_grpc:
              cluster_name: xds_cluster
          set_node_on_first_message_only: true
        lds_config:
          ads: {}
          resource_api_version: V3
        cds_config:
          ads: {}
          resource_api_version: V3
      static_resources:
        clusters:
        - connect_timeout: 10s
          load_assignment:
            cluster_name: xds_cluster
            endpoints:
            - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: envoy-gateway
                      port_value: 18000
          typed_extension_protocol_options:
            "envoy.extensions.upstreams.http.v3.HttpProtocolOptions":
               "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions"
               "explicit_http_config":
                 "http2_protocol_options": {}
          name: xds_cluster
          type: STRICT_DNS
          transport_socket:
            name: envoy.transport_sockets.tls
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
              common_tls_context:
                tls_params:
                  tls_maximum_protocol_version: TLSv1_3
                tls_certificate_sds_secret_configs:
                - name: xds_certificate
                  sds_config:
                    path_config_source:
                      path: "/sds/xds-certificate.json"
                    resource_api_version: V3
                validation_context_sds_secret_config:
                  name: xds_trusted_ca
                  sds_config:
                    path_config_source:
                      path: "/sds/xds-trusted-ca.json"
                    resource_api_version: V3
      layered_runtime:
        layers:
        - name: runtime-0
          rtds_layer:
            rtds_config:
              ads: {}
              resource_api_version: V3
            name: runtime-0
EOF
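
The following is a minimal sketch of the Merge type; it appends a stats sink to the default bootstrap. Note that the metrics_cluster cluster name is hypothetical, and such a cluster would also have to exist in the final config:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  bootstrap:
    type: Merge
    value: |
      stats_sinks:
      - name: envoy.stat_sinks.metrics_service
        typed_config:
          "@type": type.googleapis.com/envoy.config.metrics.v3.MetricsServiceConfig
          transport_api_version: V3
          grpc_service:
            envoy_grpc:
              # hypothetical cluster; it must be defined in the merged bootstrap
              cluster_name: metrics_cluster
EOF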

The following example uses JSONPatch to patch the default bootstrap config:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default 
spec:
  bootstrap:
    type: JSONPatch
    jsonPatches:
    - {"op": "add", "path": "/static_resources/clusters/0/dns_lookup_family", "value": "V4_ONLY"}
    - {"op": "replace", "path": "/admin/address/socket_address/port_value", "value": 9901}
EOF

You can use egctl x translate to get the default xDS Bootstrap configuration used by Envoy Gateway.
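
For example, assuming quickstart.yaml contains your Gateway API resources, the BootstrapConfigDump section of the output shows the default bootstrap:

egctl x translate --from gateway-api --to xds -f quickstart.yaml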

After applying the config, the bootstrap config will be overridden by the new config you provided. Any errors in the configuration will be surfaced as status within the GatewayClass resource. You can also validate this configuration using egctl x translate.

Customize EnvoyProxy Horizontal Pod Autoscaler

You can enable the Horizontal Pod Autoscaler for the EnvoyProxy Deployment. Before enabling the HPA for EnvoyProxy, please ensure that the metrics-server component is installed in the cluster.

Once confirmed, you can apply it via EnvoyProxy Config as shown below:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default 
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyHpa:
        minReplicas: 2
        maxReplicas: 10
        metrics:
          - resource:
              name: cpu
              target:
                averageUtilization: 60
                type: Utilization
            type: Resource
EOF

After applying the config, the EnvoyProxy HPA (Horizontal Pod Autoscaler) is generated. Note that once the HPA is active, Envoy Gateway no longer honors the replicas field specified in the envoyDeployment.
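
You can verify that the autoscaler was created with:

kubectl get hpa -n envoy-gateway-system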

Customize EnvoyProxy Command line options

You can customize the EnvoyProxy command line options via spec.extraArgs in the EnvoyProxy config. For example, the following configuration adds the --disable-extensions argument to disable the envoy.access_loggers/envoy.access_loggers.wasm extension:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default 
spec:
  extraArgs:
    - --disable-extensions envoy.access_loggers/envoy.access_loggers.wasm 
EOF
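
After applying the config, you can inspect the container arguments to confirm the extra args were added (a quick check, assuming the quickstart Gateway eg):

kubectl get deployment -l gateway.envoyproxy.io/owning-gateway-name=eg -n envoy-gateway-system -o jsonpath='{.items[0].spec.template.spec.containers[?(@.name=="envoy")].args}'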

Customize EnvoyProxy with Patches

You can customize the EnvoyProxy using patches.

Patching Deployment for EnvoyProxy

For example, the following configuration will add resource limits to the envoy and the shutdown-manager containers in the envoyproxy deployment:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: eg
  namespace: default 
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        patch:
          type: StrategicMerge
          value:
            spec:
              template:
                spec:
                  containers:
                  - name: envoy
                    resources:
                      limits:
                        cpu: 500m
                        memory: 1024Mi
                  - name: shutdown-manager
                    resources:
                      limits:
                        cpu: 200m
                        memory: 1024Mi
EOF

After applying the configuration, you will see the change in both containers in the envoyproxy deployment.
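
For example, the following prints the resources of both containers (assuming the Gateway eg from the quickstart):

kubectl get deployment -l gateway.envoyproxy.io/owning-gateway-name=eg -n envoy-gateway-system -o jsonpath='{.items[0].spec.template.spec.containers[*].resources}'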

Patching Service for EnvoyProxy

For example, the following configuration will add an annotation for the envoyproxy service:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: eg
  namespace: default
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        patch:
          type: StrategicMerge
          value:
            metadata:
              annotations:
                custom-annotation: foobar
EOF

After applying the configuration, you will see that the custom-annotation: foobar annotation has been added to the envoyproxy service.

Customize Filter Order

Under the hood, Envoy Gateway uses a series of Envoy HTTP filters to process HTTP requests and responses, and to apply various policies.

By default, Envoy Gateway applies the following filters in the order shown:

  • envoy.filters.http.fault
  • envoy.filters.http.cors
  • envoy.filters.http.ext_authz
  • envoy.filters.http.basic_authn
  • envoy.filters.http.oauth2
  • envoy.filters.http.jwt_authn
  • envoy.filters.http.ext_proc
  • envoy.filters.http.wasm
  • envoy.filters.http.rbac
  • envoy.filters.http.local_ratelimit
  • envoy.filters.http.ratelimit
  • envoy.filters.http.router

The default order in which these filters are applied is opinionated and may not suit all use cases. To address this, Envoy Gateway allows you to adjust the execution order of these filters with the filterOrder field in the EnvoyProxy resource.

filterOrder is a list of customized filter order configurations. Each configuration can specify a filter name and a filter to place it before or after. These configurations are applied in the order they are listed. If a filter occurs in multiple configurations, the final order is the result of applying all these configurations in order. To avoid conflicts, it is recommended to only specify one configuration per filter.

For example, the following configuration moves the envoy.filters.http.wasm filter before the envoy.filters.http.jwt_authn filter and the envoy.filters.http.cors filter after the envoy.filters.http.basic_authn filter:

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  filterOrder:
    - name: envoy.filters.http.wasm
      before: envoy.filters.http.jwt_authn
    - name: envoy.filters.http.cors
      after: envoy.filters.http.basic_authn
EOF

Customize EnvoyProxy IP Family

You can customize the IP family configuration for EnvoyProxy via the EnvoyProxy Config. This allows the Envoy Proxy fleet to serve external clients over IPv4 as well as IPv6.

The configuration below sets ipFamily to DualStack (supported values: IPv4, IPv6, and DualStack) to allow ingressing both IPv4 and IPv6 traffic.

Note: Envoy Gateway relies on the Service spec of the BackendRef resource (linked to xRoutes) to decide which type of IP addresses to use to route to them.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: default
spec:
  ipFamily: DualStack
EOF

After applying the config, the EnvoyProxy deployment will be configured to use the specified IP family. When set to DualStack, both IPv4 and IPv6 networking will be enabled.

Note: Your cluster must support the selected IP family configuration. For DualStack support, ensure your Kubernetes cluster is properly configured for dual-stack networking.

6.2 - Deployment Mode

Deployment modes

One GatewayClass per Envoy Gateway Controller

  • An Envoy Gateway is associated with a single GatewayClass resource under one controller. This is the simplest deployment mode and is suitable for scenarios where each Gateway needs to have its own dedicated set of resources and configurations.

Multiple GatewayClasses per Envoy Gateway Controller

  • An Envoy Gateway is associated with multiple GatewayClass resources under one controller.
  • Support for accepting multiple GatewayClasses was added here.

Separate Envoy Gateway Controllers

If you’ve instantiated multiple GatewayClasses, you can also run separate Envoy Gateway controllers in different namespaces, linking a GatewayClass to each of them for multi-tenancy. Please follow the example Multi-tenancy.

Merged Gateways onto a single EnvoyProxy fleet

By default, each Gateway has its own dedicated Envoy Proxy fleet and configuration. However, for some deployments, it may be more convenient to merge listeners across multiple Gateways and deploy a single Envoy Proxy fleet.

This can help to efficiently utilize the infra resources in the cluster and manage them in a centralized manner, or have a single IP address for all of the listeners. Setting the mergeGateways field in the EnvoyProxy resource linked to GatewayClass will result in merging all Gateway listeners under one GatewayClass resource.

  • The tuple of port, protocol, and hostname must be unique across all Listeners.

Please follow the example Merged gateways deployment.

Supported Modes

Kubernetes

  • The default deployment model is: Envoy Gateway watches for resources such as Service & HTTPRoute in all namespaces and creates managed data plane resources, such as the EnvoyProxy Deployment, in the namespace where Envoy Gateway is running.
  • Envoy Gateway also supports a Namespaced deployment mode: you can watch resources in specific namespaces by assigning EnvoyGateway.provider.kubernetes.watch.namespaces or EnvoyGateway.provider.kubernetes.watch.namespaceSelector, while managed data plane resources are still created in the namespace where Envoy Gateway is running (see the sketch after this list).
  • Support for alternate deployment modes is being tracked here.
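
A minimal sketch of the corresponding EnvoyGateway configuration for the Namespaced deployment mode, using the same fields that the Helm flags shown below set:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
provider:
  type: Kubernetes
  kubernetes:
    watch:
      # watch only the listed namespaces for Gateway API resources
      type: Namespaces
      namespaces: ["marketing"]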

Multi-tenancy

Kubernetes

  • A tenant is a group within an organization (e.g. a team or department) that shares organizational resources. We recommend that each tenant deploy its own Envoy Gateway controller in its respective namespace. Below is an example of the marketing and product teams deploying Envoy Gateway in separate namespaces.

  • Let's deploy Envoy Gateway in the marketing namespace, watching resources only in this namespace. We are also setting the controller name to a unique string, gateway.envoyproxy.io/marketing-gatewayclass-controller.

helm install \
--set config.envoyGateway.gateway.controllerName=gateway.envoyproxy.io/marketing-gatewayclass-controller \
--set config.envoyGateway.provider.kubernetes.watch.type=Namespaces \
--set config.envoyGateway.provider.kubernetes.watch.namespaces={marketing} \
eg-marketing oci://docker.io/envoyproxy/gateway-helm \
--version v0.0.0-latest -n marketing --create-namespace

Let's create a GatewayClass linked to the marketing team's Envoy Gateway controller, as well as other resources linked to it, so the backend application operated by this team can be exposed to external clients.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg-marketing
spec:
  controllerName: gateway.envoyproxy.io/marketing-gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
  namespace: marketing
spec:
  gatewayClassName: eg-marketing
  listeners:
    - name: http
      protocol: HTTP
      port: 8080
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend
  namespace: marketing
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: marketing
  labels:
    app: backend
    service: backend
spec:
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: marketing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
      version: v1
  template:
    metadata:
      labels:
        app: backend
        version: v1
    spec:
      serviceAccountName: backend
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: backend
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
  namespace: marketing
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.marketing.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF

Let's port-forward to the generated envoy proxy service in the marketing namespace and send a request to it.

export ENVOY_SERVICE=$(kubectl get svc -n marketing --selector=gateway.envoyproxy.io/owning-gateway-namespace=marketing,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl -n marketing port-forward service/${ENVOY_SERVICE} 8888:8080 &
curl --verbose --header "Host: www.marketing.example.com" http://localhost:8888/get
*   Trying 127.0.0.1:8888...
* Connected to localhost (127.0.0.1) port 8888 (#0)
> GET /get HTTP/1.1
> Host: www.marketing.example.com
> User-Agent: curl/7.86.0
> Accept: */*
>
Handling connection for 8888
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Thu, 20 Apr 2023 19:19:42 GMT
< content-length: 521
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/get",
 "host": "www.marketing.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.86.0"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "10.1.0.157"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "c637977c-458a-48ae-92b3-f8c429849322"
  ]
 },
 "namespace": "marketing",
 "ingress": "",
 "service": "",
 "pod": "backend-74888f465f-bcs8f"
}
* Connection #0 to host localhost left intact
  • Let's deploy Envoy Gateway in the product namespace, again watching resources only in this namespace.

helm install \
--set config.envoyGateway.gateway.controllerName=gateway.envoyproxy.io/product-gatewayclass-controller \
--set config.envoyGateway.provider.kubernetes.watch.type=Namespaces \
--set config.envoyGateway.provider.kubernetes.watch.namespaces={product} \
eg-product oci://docker.io/envoyproxy/gateway-helm \
--version v0.0.0-latest -n product --create-namespace

Let's create a GatewayClass linked to the product team's Envoy Gateway controller, as well as other resources linked to it, so the backend application operated by this team can be exposed to external clients.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg-product
spec:
  controllerName: gateway.envoyproxy.io/product-gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
  namespace: product
spec:
  gatewayClassName: eg-product
  listeners:
    - name: http
      protocol: HTTP
      port: 8080
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: backend
  namespace: product
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: product
  labels:
    app: backend
    service: backend
spec:
  ports:
    - name: http
      port: 3000
      targetPort: 3000
  selector:
    app: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: product
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
      version: v1
  template:
    metadata:
      labels:
        app: backend
        version: v1
    spec:
      serviceAccountName: backend
      containers:
        - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
          imagePullPolicy: IfNotPresent
          name: backend
          ports:
            - containerPort: 3000
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
  namespace: product
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.product.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF

Let's port-forward to the generated envoy proxy service in the product namespace and send a request to it.

export ENVOY_SERVICE=$(kubectl get svc -n product --selector=gateway.envoyproxy.io/owning-gateway-namespace=product,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl -n product port-forward service/${ENVOY_SERVICE} 8889:8080 &
curl --verbose --header "Host: www.product.example.com" http://localhost:8889/get
*   Trying 127.0.0.1:8889...
* Connected to localhost (127.0.0.1) port 8889 (#0)
> GET /get HTTP/1.1
> Host: www.product.example.com
> User-Agent: curl/7.86.0
> Accept: */*
>
Handling connection for 8889
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Thu, 20 Apr 2023 19:20:17 GMT
< content-length: 517
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
 "path": "/get",
 "host": "www.product.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/7.86.0"
  ],
  "X-Envoy-Expected-Rq-Timeout-Ms": [
   "15000"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "10.1.0.156"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "39196453-2250-4331-b756-54003b2853c2"
  ]
 },
 "namespace": "product",
 "ingress": "",
 "service": "",
 "pod": "backend-74888f465f-64fjs"
}
* Connection #0 to host localhost left intact

With the command below you can verify that you are not able to access the marketing team's backend, exposed using the www.marketing.example.com hostname, through the product team's data plane.

curl --verbose --header "Host: www.marketing.example.com" http://localhost:8889/get
*   Trying 127.0.0.1:8889...
* Connected to localhost (127.0.0.1) port 8889 (#0)
> GET /get HTTP/1.1
> Host: www.marketing.example.com
> User-Agent: curl/7.86.0
> Accept: */*
>
Handling connection for 8889
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< date: Thu, 20 Apr 2023 19:22:13 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host localhost left intact

Merged gateways deployment

In this example, we will deploy a GatewayClass

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: merged-eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy-config
    namespace: envoy-gateway-system

with a referenced EnvoyProxy resource configured to enable merged Gateways deployment mode.

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  mergeGateways: true

Deploy merged-gateways example

Deploy resources on your cluster from the example.

kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/merged-gateways.yaml

Verify that Gateways are deployed and programmed

kubectl get gateways -n default

NAMESPACE   NAME          CLASS       ADDRESS          PROGRAMMED   AGE
default     merged-eg-1   merged-eg   172.18.255.202   True         2m4s
default     merged-eg-2   merged-eg   172.18.255.202   True         2m4s
default     merged-eg-3   merged-eg   172.18.255.202   True         2m4s

Verify that HTTPRoutes are deployed

kubectl get httproute -n default
NAMESPACE   NAME              HOSTNAMES             AGE
default     hostname1-route   ["www.merged1.com"]   2m4s
default     hostname2-route   ["www.merged2.com"]   2m4s
default     hostname3-route   ["www.merged3.com"]   2m4s

If you take a look at the deployed Envoy Proxy service, you will notice that all of the Gateway listener ports are added to that service.

kubectl get service -n envoy-gateway-system
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                        AGE
envoy-gateway                   ClusterIP      10.96.141.4     <none>           18000/TCP,18001/TCP                            6m43s
envoy-gateway-metrics-service   ClusterIP      10.96.113.191   <none>           19001/TCP                                      6m43s
envoy-merged-eg-668ac7ae        LoadBalancer   10.96.48.255    172.18.255.202   8081:30467/TCP,8082:31793/TCP,8080:31153/TCP   3m17s

There should also be a single deployment (envoy-merged-eg-668ac7ae-775f9865d-55zhs) serving all the Gateways, and its name references the name of the GatewayClass.

kubectl get pods -n envoy-gateway-system
NAME                                       READY   STATUS    RESTARTS       AGE
envoy-gateway-5d998778f6-wr6m9             1/1     Running   0              6m43s
envoy-merged-eg-668ac7ae-775f9865d-55zhs   2/2     Running   0              3m17s

Testing the Configuration

Get the name of the merged gateways Envoy service:

export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gatewayclass=merged-eg -o jsonpath='{.items[0].metadata.name}')

Fetch external IP of the service:

export GATEWAY_HOST=$(kubectl get svc/${ENVOY_SERVICE} -n envoy-gateway-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

In certain environments, the load balancer may be exposed using a hostname, instead of an IP address. If so, replace ip in the above command with hostname.

Curl the route hostname-route2 through Envoy proxy:

curl --header "Host: www.merged2.com" http://$GATEWAY_HOST:8081/example2
{
 "path": "/example2",
 "host": "www.merged2.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.4.0"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "172.18.0.2"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "deed2767-a483-4291-9429-0e256ab3a65f"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "merged-backend-64ddb65fd7-ttv5z"
}

Curl the route hostname-route1 through Envoy proxy:

curl --header "Host: www.merged1.com" http://$GATEWAY_HOST:8080/example
{
 "path": "/example",
 "host": "www.merged1.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.4.0"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "172.18.0.2"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "20a53440-6327-4c3c-bc8b-8e79e7311043"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "merged-backend-64ddb65fd7-ttv5z"
}

Verify deployment of multiple GatewayClass

Install the GatewayClass, Gateway, HTTPRoute and example app from Quickstart example:

kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/latest/quickstart.yaml -n default

Let's also create an additional Gateway linked to the GatewayClass and the backend application from the Quickstart example.

cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg-2
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 8080
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: eg-2
  namespace: default
spec:
  parentRefs:
    - name: eg-2
  hostnames:
    - "www.quickstart.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF

Verify that Gateways are deployed and programmed

kubectl get gateways -n default
NAME          CLASS       ADDRESS          PROGRAMMED   AGE
eg            eg          172.18.255.203   True         114s
eg-2          eg          172.18.255.204   True         89s
merged-eg-1   merged-eg   172.18.255.202   True         8m33s
merged-eg-2   merged-eg   172.18.255.202   True         8m33s
merged-eg-3   merged-eg   172.18.255.202   True         8m33s

Verify that HTTPRoutes are deployed

kubectl get httproute -n default
NAMESPACE   NAME              HOSTNAMES                        AGE
default     backend           ["www.example.com"]              2m29s
default     eg-2              ["www.quickstart.example.com"]   87s
default     hostname1-route   ["www.merged1.com"]              10m4s
default     hostname2-route   ["www.merged2.com"]              10m4s
default     hostname3-route   ["www.merged3.com"]              10m4s

Verify that services are now deployed separately.

kubectl get service -n envoy-gateway-system
NAME                            TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                        AGE
envoy-default-eg-2-7e515b2f     LoadBalancer   10.96.121.46    172.18.255.204   8080:32705/TCP                                 3m27s
envoy-default-eg-e41e7b31       LoadBalancer   10.96.11.244    172.18.255.203   80:31930/TCP                                   2m26s
envoy-gateway                   ClusterIP      10.96.141.4     <none>           18000/TCP,18001/TCP                            14m25s
envoy-gateway-metrics-service   ClusterIP      10.96.113.191   <none>           19001/TCP                                      14m25s
envoy-merged-eg-668ac7ae        LoadBalancer   10.96.243.32    172.18.255.202   8082:31622/TCP,8080:32262/TCP,8081:32305/TCP   10m59s

There should be one deployment for each newly deployed Gateway, and each deployment's name references the namespace and the name of its Gateway.

kubectl get pods -n envoy-gateway-system
NAME                                          READY   STATUS    RESTARTS   AGE
envoy-default-eg-2-7e515b2f-8c98fdf88-p6jhg   2/2     Running   0          3m27s
envoy-default-eg-e41e7b31-6f998d85d7-jpvmj    2/2     Running   0          2m26s
envoy-gateway-5d998778f6-wr6m9                1/1     Running   0          14m25s
envoy-merged-eg-668ac7ae-5958f7b7f6-9h9v2     2/2     Running   0          10m59s

Testing the Configuration

Get the name of the Envoy service created for the quickstart Gateway eg:

export DEFAULT_ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

Fetch external IP of the service:

export DEFAULT_GATEWAY_HOST=$(kubectl get svc/${DEFAULT_ENVOY_SERVICE} -n envoy-gateway-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

Curl the Quickstart backend route through Envoy proxy:

curl --header "Host: www.example.com" http://$DEFAULT_GATEWAY_HOST
{
 "path": "/",
 "host": "www.example.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.4.0"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "172.18.0.2"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "70a40595-67a1-4776-955b-2dee361baed7"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "backend-96f75bbf-6w67z"
}

Curl the route hostname-route3 through Envoy proxy:

curl --header "Host: www.merged3.com" http://$GATEWAY_HOST:8082/example3
{
 "path": "/example3",
 "host": "www.merged3.com",
 "method": "GET",
 "proto": "HTTP/1.1",
 "headers": {
  "Accept": [
   "*/*"
  ],
  "User-Agent": [
   "curl/8.4.0"
  ],
  "X-Envoy-Internal": [
   "true"
  ],
  "X-Forwarded-For": [
   "172.18.0.2"
  ],
  "X-Forwarded-Proto": [
   "http"
  ],
  "X-Request-Id": [
   "47aeaef3-abb5-481a-ab92-c2ae3d0862d6"
  ]
 },
 "namespace": "default",
 "ingress": "",
 "service": "",
 "pod": "merged-backend-64ddb65fd7-k84gv"
}

6.3 - Standalone Deployment Mode

Envoy Gateway also supports running in standalone mode. In this mode, Envoy Gateway does not need to rely on Kubernetes and can be deployed directly on bare metal or virtual machines.

Currently, Envoy Gateway only supports the combination of the file provider and the host infrastructure provider.

  • The file provider configures Envoy Gateway to read all gateway-api resources from the file system.
  • The host infrastructure provider configures Envoy Gateway to deploy one Envoy Proxy as a host process.

Quick Start

In this quick-start, we will run Envoy Gateway in standalone mode with the file provider and the host infrastructure provider.

Prerequisites

Create a local directory just for testing:

mkdir -p /tmp/envoy-gateway-test

As we do not provide the Envoy Gateway binary in the latest release, you can compile the binary yourself from the project source with the following command:

make build

The compiled binary lies in bin/{os}/{arch}/envoy-gateway.

Create Certificates

All runners in Envoy Gateway communicate over TLS, so create these TLS certificates locally to ensure that Envoy Gateway works properly.

envoy-gateway certgen --local

Start Envoy Gateway

Start Envoy Gateway with the following command:

envoy-gateway server --config-path standalone.yaml

with standalone.yaml configuration:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
provider:
  type: Custom
  custom:
    resource:
      type: File
      file:
        paths: ["/tmp/envoy-gateway-test"]
    infrastructure:
      type: Host
      host: {}
logging:
  level:
    default: info
extensionApis:
  enableBackend: true

As you can see, we have enabled the Backend API; it will be used to represent our local endpoints.
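
A rough sketch of such a Backend resource (the name local-backend is illustrative; the example file copied below already contains a suitable one), pointing at a local process on port 3000:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: local-backend
  namespace: default
spec:
  endpoints:
  # a static IP endpoint for the local test server
  - ip:
      address: 127.0.0.1
      port: 3000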

Trigger an Update

Any changes under the watched paths will be considered an update by the file provider.

For instance, copying the example file into /tmp/envoy-gateway-test/ will trigger an update of the gateway-api resources:

cp examples/standalone/quickstart.yaml /tmp/envoy-gateway-test/quickstart.yaml

From the Envoy Gateway log, you should be able to observe that the Envoy Proxy has been started, and its admin address has been returned.

Test Connection

Start a simple local server as an endpoint:

python3 -m http.server 3000

Curl the example server through Envoy Proxy:

curl --verbose --header "Host: www.example.com" http://0.0.0.0:8888/
*   Trying 0.0.0.0:8888...
* Connected to 0.0.0.0 (127.0.0.1) port 8888 (#0)
> GET / HTTP/1.1
> Host: www.example.com
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< server: SimpleHTTP/0.6 Python/3.10.12
< date: Sat, 26 Oct 2024 13:20:34 GMT
< content-type: text/html; charset=utf-8
< content-length: 1870
<
...
* Connection #0 to host 0.0.0.0 left intact

6.4 - Use egctl

egctl is a command line tool to provide additional functionality for Envoy Gateway users.

egctl experimental translate

This subcommand allows users to translate from an input configuration type to an output configuration type.

The translate subcommand can translate Kubernetes resources to:

  • Gateway API resources. This is useful in order to see how validation would occur if these resources were applied to Kubernetes.

    Use the --to gateway-api parameter to translate to Gateway API resources.

  • Envoy Gateway intermediate representation (IR). This represents Envoy Gateway’s translation of the Gateway API resources.

    Use the --to ir parameter to translate to Envoy Gateway intermediate representation.

  • Envoy Proxy xDS. This is the xDS configuration provided to Envoy Proxy.

    Use the --to xds parameter to translate to Envoy Proxy xDS.

In the below example, we will translate the Kubernetes resources (including the Gateway API resources) into xDS resources.

cat <<EOF | egctl x translate --from gateway-api --to xds -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: v1
kind: Namespace
metadata:
  name: default 
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
  labels:
    app: backend
    service: backend
spec:
  clusterIP: "1.1.1.1"
  type: ClusterIP
  ports:
    - name: http
      port: 3000
      targetPort: 3000
      protocol: TCP
  selector:
    app: backend
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF
configKey: default-eg
configs:
- '@type': type.googleapis.com/envoy.admin.v3.BootstrapConfigDump
  bootstrap:
    admin:
      accessLog:
      - name: envoy.access_loggers.file
        typedConfig:
          '@type': type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
          path: /dev/null
      address:
        socketAddress:
          address: 127.0.0.1
          portValue: 19000
    dynamicResources:
      cdsConfig:
        apiConfigSource:
          apiType: DELTA_GRPC
          grpcServices:
          - envoyGrpc:
              clusterName: xds_cluster
          setNodeOnFirstMessageOnly: true
          transportApiVersion: V3
        resourceApiVersion: V3
      ldsConfig:
        apiConfigSource:
          apiType: DELTA_GRPC
          grpcServices:
          - envoyGrpc:
              clusterName: xds_cluster
          setNodeOnFirstMessageOnly: true
          transportApiVersion: V3
        resourceApiVersion: V3
    layeredRuntime:
      layers:
      - name: runtime-0
        rtdsLayer:
          name: runtime-0
          rtdsConfig:
            apiConfigSource:
              apiType: DELTA_GRPC
              grpcServices:
              - envoyGrpc:
                  clusterName: xds_cluster
              transportApiVersion: V3
            resourceApiVersion: V3
    staticResources:
      clusters:
      - connectTimeout: 10s
        loadAssignment:
          clusterName: xds_cluster
          endpoints:
          - lbEndpoints:
            - endpoint:
                address:
                  socketAddress:
                    address: envoy-gateway
                    portValue: 18000
        name: xds_cluster
        transportSocket:
          name: envoy.transport_sockets.tls
          typedConfig:
            '@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
            commonTlsContext:
              tlsCertificateSdsSecretConfigs:
              - name: xds_certificate
                sdsConfig:
                  pathConfigSource:
                    path: /sds/xds-certificate.json
                  resourceApiVersion: V3
              tlsParams:
                tlsMaximumProtocolVersion: TLSv1_3
              validationContextSdsSecretConfig:
                name: xds_trusted_ca
                sdsConfig:
                  pathConfigSource:
                    path: /sds/xds-trusted-ca.json
                  resourceApiVersion: V3
        type: STRICT_DNS
        typedExtensionProtocolOptions:
          envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
            '@type': type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
            explicitHttpConfig:
              http2ProtocolOptions: {}
- '@type': type.googleapis.com/envoy.admin.v3.ClustersConfigDump
  dynamicActiveClusters:
  - cluster:
      '@type': type.googleapis.com/envoy.config.cluster.v3.Cluster
      commonLbConfig:
        localityWeightedLbConfig: {}
      connectTimeout: 10s
      dnsLookupFamily: V4_ONLY
      loadAssignment:
        clusterName: default-backend-rule-0-match-0-www.example.com
        endpoints:
        - lbEndpoints:
          - endpoint:
              address:
                socketAddress:
                  address: 1.1.1.1
                  portValue: 3000
            loadBalancingWeight: 1
          loadBalancingWeight: 1
          locality: {}
      name: default-backend-rule-0-match-0-www.example.com
      outlierDetection: {}
      type: STATIC
- '@type': type.googleapis.com/envoy.admin.v3.ListenersConfigDump
  dynamicListeners:
  - activeState:
      listener:
        '@type': type.googleapis.com/envoy.config.listener.v3.Listener
        accessLog:
        - filter:
            responseFlagFilter:
              flags:
              - NR
          name: envoy.access_loggers.file
          typedConfig:
            '@type': type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
            path: /dev/stdout
        address:
          socketAddress:
            address: 0.0.0.0
            portValue: 10080
        defaultFilterChain:
          filters:
          - name: envoy.filters.network.http_connection_manager
            typedConfig:
              '@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              accessLog:
              - name: envoy.access_loggers.file
                typedConfig:
                  '@type': type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
                  path: /dev/stdout
              httpFilters:
              - name: envoy.filters.http.router
                typedConfig:
                  '@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
              rds:
                configSource:
                  apiConfigSource:
                    apiType: DELTA_GRPC
                    grpcServices:
                    - envoyGrpc:
                        clusterName: xds_cluster
                    setNodeOnFirstMessageOnly: true
                    transportApiVersion: V3
                  resourceApiVersion: V3
                routeConfigName: default-eg-http
              statPrefix: http
              upgradeConfigs:
              - upgradeType: websocket
              useRemoteAddress: true
        name: default-eg-http
- '@type': type.googleapis.com/envoy.admin.v3.RoutesConfigDump
  dynamicRouteConfigs:
  - routeConfig:
      '@type': type.googleapis.com/envoy.config.route.v3.RouteConfiguration
      name: default-eg-http
      virtualHosts:
      - domains:
        - www.example.com
        name: default-eg-http-www.example.com
        routes:
        - match:
            prefix: /
          route:
            cluster: default-backend-rule-0-match-0-www.example.com
resourceType: all

You can also use the --type/-t flag to retrieve specific output types. In the below example, we will translate the Kubernetes resources (including the Gateway API resources) into xDS route resources.

cat <<EOF | egctl x translate --from gateway-api --to xds -t route -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: v1
kind: Namespace
metadata:
  name: default 
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
  labels:
    app: backend
    service: backend
spec:
  clusterIP: "1.1.1.1"
  type: ClusterIP
  ports:
    - name: http
      port: 3000
      targetPort: 3000
      protocol: TCP
  selector:
    app: backend
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF
'@type': type.googleapis.com/envoy.admin.v3.RoutesConfigDump
configKey: default-eg
dynamicRouteConfigs:
- routeConfig:
    '@type': type.googleapis.com/envoy.config.route.v3.RouteConfiguration
    name: default-eg-http
    virtualHosts:
    - domains:
      - www.example.com
      name: default-eg-http
      routes:
      - match:
          prefix: /
        route:
          cluster: default-backend-rule-0-match-0-www.example.com
resourceType: route

Add Missing Resources

You can pass the --add-missing-resources flag to use dummy non-Gateway API resources instead of specifying them explicitly.

For example, this will produce a result similar to the above:

cat <<EOF | egctl x translate --add-missing-resources --from gateway-api --to gateway-api -t route -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
  namespace: default
spec:
  parentRefs:
    - name: eg
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF

You can see that the output contains an EnvoyProxy resource that can be used as a starting point for modifying the xDS bootstrap configuration of the managed Envoy Proxy fleet.

envoyProxy:
  metadata:
    creationTimestamp: null
    name: default-envoy-proxy
    namespace: envoy-gateway-system
  spec:
    bootstrap: |
      admin:
        access_log:
        - name: envoy.access_loggers.file
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
            path: /dev/null
        address:
          socket_address:
            address: 127.0.0.1
            port_value: 19000
      dynamic_resources:
        ads_config:
          api_type: DELTA_GRPC
          transport_api_version: V3
          grpc_services:
          - envoy_grpc:
              cluster_name: xds_cluster
          set_node_on_first_message_only: true
        lds_config:
          ads: {}
          resource_api_version: V3
        cds_config:
          ads: {}
          resource_api_version: V3
      static_resources:
        clusters:
        - connect_timeout: 10s
          load_assignment:
            cluster_name: xds_cluster
            endpoints:
            - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: envoy-gateway
                      port_value: 18000
          typed_extension_protocol_options:
            "envoy.extensions.upstreams.http.v3.HttpProtocolOptions":
               "@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions"
               "explicit_http_config":
                 "http2_protocol_options": {}
          name: xds_cluster
          type: STRICT_DNS
          transport_socket:
            name: envoy.transport_sockets.tls
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
              common_tls_context:
                tls_params:
                  tls_maximum_protocol_version: TLSv1_3
                tls_certificate_sds_secret_configs:
                - name: xds_certificate
                  sds_config:
                    path_config_source:
                      path: "/sds/xds-certificate.json"
                    resource_api_version: V3
                validation_context_sds_secret_config:
                  name: xds_trusted_ca
                  sds_config:
                    path_config_source:
                      path: "/sds/xds-trusted-ca.json"
                    resource_api_version: V3
      layered_runtime:
        layers:
        - name: runtime-0
          rtds_layer:
            rtds_config:
              ads: {}
              resource_api_version: V3
            name: runtime-0      
    logging: {}
  status: {}
gatewayClass:
  metadata:
    creationTimestamp: null
    name: eg
    namespace: envoy-gateway-system
  spec:
    controllerName: gateway.envoyproxy.io/gatewayclass-controller
    parametersRef:
      group: gateway.envoyproxy.io
      kind: EnvoyProxy
      name: default-envoy-proxy
      namespace: envoy-gateway-system
  status:
    conditions:
    - lastTransitionTime: "2023-04-19T20:30:46Z"
      message: Valid GatewayClass
      reason: Accepted
      status: "True"
      type: Accepted
gateways:
- metadata:
    creationTimestamp: null
    name: eg
    namespace: default
  spec:
    gatewayClassName: eg
    listeners:
    - name: http
      port: 80
      protocol: HTTP
  status:
    listeners:
    - attachedRoutes: 1
      conditions:
      - lastTransitionTime: "2023-04-19T20:30:46Z"
        message: Sending translated listener configuration to the data plane
        reason: Programmed
        status: "True"
        type: Programmed
      - lastTransitionTime: "2023-04-19T20:30:46Z"
        message: Listener has been successfully translated
        reason: Accepted
        status: "True"
        type: Accepted
      name: http
      supportedKinds:
      - group: gateway.networking.k8s.io
        kind: HTTPRoute
      - group: gateway.networking.k8s.io
        kind: GRPCRoute
httpRoutes:
- metadata:
    creationTimestamp: null
    name: backend
    namespace: default
  spec:
    hostnames:
    - www.example.com
    parentRefs:
    - name: eg
    rules:
    - backendRefs:
      - group: ""
        kind: Service
        name: backend
        port: 3000
        weight: 1
      matches:
      - path:
          type: PathPrefix
          value: /
  status:
    parents:
    - conditions:
      - lastTransitionTime: "2023-04-19T20:30:46Z"
        message: Route is accepted
        reason: Accepted
        status: "True"
        type: Accepted
      - lastTransitionTime: "2023-04-19T20:30:46Z"
        message: Resolved all the Object references for the Route
        reason: ResolvedRefs
        status: "True"
        type: ResolvedRefs
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
      parentRef:
        name: eg
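
If you want to carry this over into a live cluster, one possible workflow is to extract the generated EnvoyProxy resource and apply it. This is only a sketch: it assumes yq v4 is installed, quickstart-input.yaml is a hypothetical file holding the input manifests above, and the dumped resource needs apiVersion/kind added before kubectl will accept it.

# Re-run the translation and keep only the generated EnvoyProxy resource.
egctl x translate --add-missing-resources \
  --from gateway-api --to gateway-api \
  -f quickstart-input.yaml | yq e '.envoyProxy' - > envoy-proxy.yaml

# The dump omits apiVersion/kind, so add them before applying.
yq e -i '.apiVersion = "gateway.envoyproxy.io/v1alpha1" | .kind = "EnvoyProxy"' envoy-proxy.yaml

# Edit the bootstrap: block as needed, then apply.
kubectl apply -f envoy-proxy.yaml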

Sometimes egctl doesn’t produce the result you expect. For example, the following input yields an empty route resource:

cat <<EOF | egctl x translate --from gateway-api --type route --to xds -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
    - name: tls
      protocol: TLS
      port: 8443
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF
xds:
  envoy-gateway-system-eg:
    '@type': type.googleapis.com/envoy.admin.v3.RoutesConfigDump

Validating Gateway API Configuration

You can add gateway-api as an additional target to show the processed Gateway API resources. For example, translating the above resources with this extra target shows that the HTTPRoute is rejected because there is no ready listener for it:

cat <<EOF | egctl x translate --from gateway-api --type route --to gateway-api,xds -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: eg
spec:
  gatewayClassName: eg
  listeners:
    - name: tls
      protocol: TLS
      port: 8443
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: eg
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /
EOF
gatewayClass:
  metadata:
    creationTimestamp: null
    name: eg
    namespace: envoy-gateway-system
  spec:
    controllerName: gateway.envoyproxy.io/gatewayclass-controller
  status:
    conditions:
    - lastTransitionTime: "2023-04-19T20:54:52Z"
      message: Valid GatewayClass
      reason: Accepted
      status: "True"
      type: Accepted
gateways:
- metadata:
    creationTimestamp: null
    name: eg
    namespace: envoy-gateway-system
  spec:
    gatewayClassName: eg
    listeners:
    - name: tls
      port: 8443
      protocol: TLS
  status:
    listeners:
    - attachedRoutes: 0
      conditions:
      - lastTransitionTime: "2023-04-19T20:54:52Z"
        message: Listener must have TLS set when protocol is TLS.
        reason: Invalid
        status: "False"
        type: Programmed
      name: tls
      supportedKinds:
      - group: gateway.networking.k8s.io
        kind: TLSRoute
httpRoutes:
- metadata:
    creationTimestamp: null
    name: backend
    namespace: envoy-gateway-system
  spec:
    parentRefs:
    - name: eg
    rules:
    - backendRefs:
      - group: ""
        kind: Service
        name: backend
        port: 3000
        weight: 1
      matches:
      - path:
          type: PathPrefix
          value: /
  status:
    parents:
    - conditions:
      - lastTransitionTime: "2023-04-19T20:54:52Z"
        message: There are no ready listeners for this parent ref
        reason: NoReadyListeners
        status: "False"
        type: Accepted
      - lastTransitionTime: "2023-04-19T20:54:52Z"
        message: Service envoy-gateway-system/backend not found
        reason: BackendNotFound
        status: "False"
        type: ResolvedRefs
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
      parentRef:
        name: eg
xds:
  envoy-gateway-system-eg:
    '@type': type.googleapis.com/envoy.admin.v3.RoutesConfigDump

egctl experimental status

This subcommand shows a status summary for a specific resource type, or for all resource types, so you can quickly find the status of any resource.

By default, egctl x status displays all the conditions for a resource type. You can add --quiet to display only the latest condition, or --verbose to display more details about the current status.
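
For instance, the three verbosity levels for a single resource type look like this (a sketch; the gateway resource type and the -n flag are taken from the examples below):

egctl x status gateway -n default             # default: all conditions
egctl x status gateway -n default --quiet     # latest condition only
egctl x status gateway -n default --verbose   # adds message, observed generation and timestamps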

Some examples of this command after installing the Multi-tenancy example manifest:

  • Show the summary of GatewayClass.
~ egctl x status gatewayclass

NAME           TYPE       STATUS    REASON
eg-marketing   Accepted   True      Accepted
eg-product     Accepted   True      Accepted
  • Show the summary of all resource types under all namespaces; resource types with an empty status are ignored.
~ egctl x status all -A

NAME                        TYPE       STATUS    REASON
gatewayclass/eg-marketing   Accepted   True      Accepted
gatewayclass/eg-product     Accepted   True      Accepted

NAMESPACE   NAME         TYPE         STATUS    REASON
marketing   gateway/eg   Programmed   True      Programmed
                         Accepted     True      Accepted
product     gateway/eg   Programmed   True      Programmed
                         Accepted     True      Accepted

NAMESPACE   NAME                PARENT       TYPE           STATUS    REASON
marketing   httproute/backend   gateway/eg   ResolvedRefs   True      ResolvedRefs
                                             Accepted       True      Accepted
product     httproute/backend   gateway/eg   ResolvedRefs   True      ResolvedRefs
                                             Accepted       True      Accepted
  • Show the summary of all the Gateways with details under all namespaces.
~ egctl x status gateway --verbose --all-namespaces

NAMESPACE   NAME      TYPE         STATUS    REASON       MESSAGE                                                                    OBSERVED GENERATION   LAST TRANSITION TIME
marketing   eg        Programmed   True      Programmed   Address assigned to the Gateway, 1/1 envoy Deployment replicas available   1                     2024-02-02 18:17:14 +0800 CST
                      Accepted     True      Accepted     The Gateway has been scheduled by Envoy Gateway                            1                     2024-02-01 17:50:39 +0800 CST
product     eg        Programmed   True      Programmed   Address assigned to the Gateway, 1/1 envoy Deployment replicas available   1                     2024-02-02 18:17:14 +0800 CST
                      Accepted     True      Accepted     The Gateway has been scheduled by Envoy Gateway                            1                     2024-02-01 17:52:42 +0800 CST
  • Show the summary of the latest Gateway condition under the product namespace.
~ egctl x status gateway --quiet -n product

NAME      TYPE         STATUS    REASON
eg        Programmed   True      Programmed
  • Show the summary of the latest HTTPRoute condition under all namespaces.
~ egctl x status httproute --quiet --all-namespaces

NAMESPACE   NAME      PARENT       TYPE           STATUS    REASON
marketing   backend   gateway/eg   ResolvedRefs   True      ResolvedRefs
product     backend   gateway/eg   ResolvedRefs   True      ResolvedRefs

egctl experimental dashboard

This subcommand streamlines access to the Envoy admin dashboard. Execute the following command:

egctl x dashboard envoy-proxy -n envoy-gateway-system envoy-engw-eg-a9c23fbb-558f94486c-82wh4

You will see the following output:

http://localhost:19000

The Envoy admin dashboard will automatically open in your default web browser, eliminating the need to manually locate and expose the admin port.
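
If you don’t have the proxy pod name handy, you can look it up first. A minimal sketch, assuming the proxy pods carry the same owning-gateway labels used for the Envoy service elsewhere in these tasks:

# Pick the first Envoy proxy pod owned by the "eg" Gateway in the default namespace.
export ENVOY_POD=$(kubectl get pods -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')

egctl x dashboard envoy-proxy -n envoy-gateway-system $ENVOY_POD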

egctl experimental install

This subcommand can be used to install envoy-gateway.

egctl x install

By default, this will install both the envoy-gateway workload resources and the required gateway-api and envoy-gateway CRDs.

You can install only the workload resources via --skip-crds:

egctl x install --skip-crds

You can install only the CRDs via --only-crds:

egctl x install --only-crds

You can specify --name and --namespace to install envoy-gateway in different places to support multi-tenant mode.

Note: If the CRDs are already installed, specify --skip-crds to avoid installing them again.

egctl x install --name shop-backend --namespace shop
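
Putting these flags together, a multi-tenant installation might look like the following sketch (the tenant names and namespaces are illustrative):

# Install the cluster-scoped CRDs once.
egctl x install --only-crds

# Install one envoy-gateway per tenant, skipping the already-installed CRDs.
egctl x install --name shop-backend --namespace shop --skip-crds
egctl x install --name media-backend --namespace media --skip-crds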

egctl experimental uninstall

This subcommand can be used to uninstall envoy-gateway.

egctl x uninstall

By default, this will only uninstall the envoy-gateway workload resources. To also uninstall the CRDs, specify --with-crds:

egctl x uninstall --with-crds
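
For a multi-tenant setup, you would remove each installation by name. This is a sketch that assumes uninstall accepts the same --name and --namespace flags as install:

egctl x uninstall --name shop-backend --namespace shop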