Traffic
- 1: Backend Routing
- 2: Circuit Breakers
- 3: Client Traffic Policy
- 4: Connection Limit
- 5: Direct Response
- 6: Failover
- 7: Fault Injection
- 8: Gateway Address
- 9: Gateway API Support
- 10: Global Rate Limit
- 11: GRPC Routing
- 12: HTTP Redirects
- 13: HTTP Request Headers
- 14: HTTP Response Headers
- 15: HTTP Routing
- 16: HTTP Timeouts
- 17: HTTP URL Rewrite
- 18: HTTP3
- 19: HTTPRoute Request Mirroring
- 20: HTTPRoute Traffic Splitting
- 21: Load Balancing
- 22: Local Rate Limit
- 23: Multicluster Service Routing
- 24: Response Override
- 25: Retry
- 26: Routing outside Kubernetes
- 27: TCP Routing
- 28: UDP Routing
1 - Backend Routing
Envoy Gateway supports routing to native K8s resources such as Service and ServiceImport. The Backend API is a custom Envoy Gateway extension resource that can be used in a Gateway-API BackendObjectReference.
Motivation
The Backend API was added to support several use cases:
- Allowing users to integrate Envoy with services (Ext Auth, Rate Limit, ALS, …) using Unix Domain Sockets, which are currently not supported by K8s.
- Simplify routing to cluster-external backends, which currently requires users to maintain both K8s Service and EndpointSlice resources.
Warning
Similar to the K8s EndpointSlice API, the Backend API can be misused to allow traffic to be sent to otherwise restricted destinations, as described in CVE-2021-25740. A Backend resource can be used to:
- Expose a Service or Pod that should not be accessible
- Reference a Service or Pod by a Route without appropriate Reference Grants
- Expose the Envoy Proxy localhost (including the Envoy admin endpoint)
For these reasons, the Backend API is disabled by default in Envoy Gateway configuration. Envoy Gateway admins are advised to follow upstream recommendations and restrict access to the Backend API using K8s RBAC.
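For example, here is a minimal sketch of a namespaced Role granting read-only access to Backend resources; the role name and namespace are illustrative, and write verbs like create and update would be reserved for trusted admins:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  # Illustrative name; scope and verbs should match your own policy.
  name: backend-reader
  namespace: default
rules:
- apiGroups: ["gateway.envoyproxy.io"]
  resources: ["backends"]
  verbs: ["get", "list", "watch"]
EOF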
Restrictions
The Backend API is currently supported only in the following BackendReferences:
- HTTPRoute: IP and FQDN endpoints
- TLSRoute: IP and FQDN endpoints
- Envoy Extension Policy (ExtProc): IP, FQDN and unix domain socket endpoints
- Security Policy: IP and FQDN endpoints for the OIDC providers
The Backend API supports attachment of the following policies:
Certain restrictions apply on the value of hostnames and addresses. For example, the loopback IP address range and the localhost hostname are forbidden.
Envoy Gateway does not manage the lifecycle of Unix domain sockets referenced by the Backend resource. Envoy Gateway admins are responsible for creating and mounting the sockets into the Envoy proxy pod. The latter can be achieved by patching the Envoy deployment using the EnvoyProxy resource, as sketched below.
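A rough sketch of such a patch; the EnvoyProxy resource name, volume name, and socket path are illustrative, and the patch field assumes the Kubernetes provider:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  # Illustrative name; reference this from the GatewayClass parametersRef.
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        patch:
          type: StrategicMerge
          value:
            spec:
              template:
                spec:
                  containers:
                  - name: envoy
                    volumeMounts:
                    # Mount the directory containing the pre-created socket.
                    - name: socket-dir
                      mountPath: /var/run/sockets
                  volumes:
                  - name: socket-dir
                    hostPath:
                      path: /var/run/sockets
                      type: DirectoryOrCreate
EOF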
Quickstart
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Enable Backend
By default, Backend is disabled. Let's enable it in the EnvoyGateway startup configuration.
The default installation of Envoy Gateway installs a default EnvoyGateway configuration and attaches it using a ConfigMap. In the next step, we will update this resource to enable Backend.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    extensionApis:
      enableBackend: true
EOF
After updating the ConfigMap, you will need to wait for the configuration to take effect. You can force the configuration to be reloaded by restarting the envoy-gateway deployment:
kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
Testing
Route to External Backend
- In the following example, we will create a Backend resource that routes to httpbin.org:80 and an HTTPRoute that references this backend.
cat <<EOF | kubectl apply -f -
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - backendRefs:
    - group: gateway.envoyproxy.io
      kind: Backend
      name: httpbin
    matches:
    - path:
        type: PathPrefix
        value: /
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: httpbin
  namespace: default
spec:
  endpoints:
  - fqdn:
      hostname: httpbin.org
      port: 80
EOF
Get the Gateway address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Send a request and view the response:
curl -I -HHost:www.example.com http://${GATEWAY_HOST}/headers
2 - Circuit Breakers
Envoy circuit breakers can be used to fail quickly and apply back-pressure in response to upstream service degradation.
Envoy Gateway supports the following circuit breaker thresholds:
- Concurrent Connections: limit the connections that Envoy can establish to the upstream service. When this threshold is met, new connections will not be established, and some requests will be queued until an existing connection becomes available.
- Concurrent Requests: limit on concurrent requests in-flight from Envoy to the upstream service. When this threshold is met, requests will be queued.
- Pending Requests: limit the pending request queue size. When this threshold is met, overflowing requests will be terminated with a 503 status code.
Envoy’s circuit breakers are distributed: counters are not synchronized across different Envoy processes. The default Envoy and Envoy Gateway circuit breaker threshold values (1024) may be too strict for high-throughput systems.
Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired circuit breaker thresholds. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Note: There are distinct circuit breaker counters for each BackendReference in an xRoute rule. Even if a BackendTrafficPolicy targets a Gateway, each BackendReference in that gateway still has a separate circuit breaker counter.
Prerequisites
Install Envoy Gateway
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Install the hey load testing tool
- The hey CLI will be used to generate load and measure response times. Follow the installation instructions from the Hey project docs.
Test and customize circuit breaker settings
This example will simulate a degraded backend that responds within 10 seconds by adding the ?delay=10s query parameter to API calls. The hey tool will be used to generate 100 concurrent requests.
hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/?delay=10s
Summary:
Total: 10.3426 secs
Slowest: 10.3420 secs
Fastest: 10.0664 secs
Average: 10.2145 secs
Requests/sec: 9.6687
Total data: 36600 bytes
Size/request: 366 bytes
Response time histogram:
10.066 [1] |■■■■
10.094 [4] |■■■■■■■■■■■■■■■
10.122 [9] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
10.149 [10] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
10.177 [10] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
10.204 [11] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
10.232 [11] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
10.259 [11] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
10.287 [11] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
10.314 [11] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
10.342 [11] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
The default circuit breaker threshold (1024) is not met. As a result, requests do not overflow: all requests are proxied upstream and both Envoy and clients wait for 10s.
In order to fail fast, apply a BackendTrafficPolicy that limits concurrent requests to 10 and pending requests to 0.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: circuitbreaker-for-route
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  circuitBreaker:
    maxPendingRequests: 0
    maxParallelRequests: 10
EOF
Execute the load simulation again.
hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/?delay=10s
Summary:
Total: 10.1230 secs
Slowest: 10.1224 secs
Fastest: 0.0529 secs
Average: 1.0677 secs
Requests/sec: 9.8785
Total data: 10940 bytes
Size/request: 109 bytes
Response time histogram:
0.053 [1] |
1.060 [89] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
2.067 [0] |
3.074 [0] |
4.081 [0] |
5.088 [0] |
6.095 [0] |
7.102 [0] |
8.109 [0] |
9.115 [0] |
10.122 [10] |■■■■
With the new circuit breaker settings, and due to the slowness of the backend, only the first 10 concurrent requests were proxied, while the other 90 overflowed.
- Overflowing requests failed fast, reducing proxy resource consumption.
- Upstream traffic was limited, alleviating the pressure on the degraded service.
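To double-check the applied thresholds, you can inspect the Envoy admin interface. The commands below are a sketch: they assume the owning-gateway labels that Envoy Gateway sets on managed resources, and use jq to filter the cluster configuration for circuit breaker settings:
export ENVOY_DEPLOYMENT=$(kubectl get deploy -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward deploy/${ENVOY_DEPLOYMENT} -n envoy-gateway-system 19000:19000 &
# Recursively pull any circuit_breakers blocks out of the config dump.
curl -s http://localhost:19000/config_dump | jq '.. | .circuit_breakers? // empty'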
3 - Client Traffic Policy
This task explains the usage of the ClientTrafficPolicy API.
Introduction
The ClientTrafficPolicy API allows system administrators to configure how the Envoy Proxy server behaves with downstream clients.
Motivation
This API was added as a new policy attachment resource that can be applied to Gateway resources. It holds settings that configure the behavior of the connection between the downstream client and the Envoy Proxy listener. It is the counterpart to the BackendTrafficPolicy API resource.
Quickstart
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Support TCP keepalive for downstream client
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-tcp-keepalive-policy
  namespace: default
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  tcpKeepalive:
    idleTime: 20m
    interval: 60s
    probes: 3
EOF
Verify that ClientTrafficPolicy is Accepted:
kubectl get clienttrafficpolicies.gateway.envoyproxy.io -n default
You should see the policy marked as accepted like this:
NAME STATUS AGE
enable-tcp-keepalive-policy Accepted 5s
Curl the example app through Envoy proxy once again:
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/get --next --header "Host: www.example.com" http://$GATEWAY_HOST/headers
You should see the output like this:
* Trying 172.18.255.202:80...
* Connected to 172.18.255.202 (172.18.255.202) port 80 (#0)
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Fri, 01 Dec 2023 10:17:04 GMT
< content-length: 507
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/get",
"host": "www.example.com",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/8.1.2"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Envoy-Internal": [
"true"
],
"X-Forwarded-For": [
"172.18.0.2"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"4d0d33e8-d611-41f0-9da0-6458eec20fa5"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-58d58f745-2zwvn"
* Connection #0 to host 172.18.255.202 left intact
}* Found bundle for host: 0x7fb9f5204ea0 [serially]
* Can not multiplex, even if we wanted to
* Re-using existing connection #0 with host 172.18.255.202
> GET /headers HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Fri, 01 Dec 2023 10:17:04 GMT
< content-length: 511
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/headers",
"host": "www.example.com",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/8.1.2"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Envoy-Internal": [
"true"
],
"X-Forwarded-For": [
"172.18.0.2"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"9a8874c0-c117-481c-9b04-933571732ca5"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-58d58f745-2zwvn"
* Connection #0 to host 172.18.255.202 left intact
}
You can see the connection being kept alive and reused in the output:
* Connection #0 to host 172.18.255.202 left intact
* Re-using existing connection #0 with host 172.18.255.202
Enable Proxy Protocol for downstream client
This example configures Proxy Protocol for downstream clients.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-proxy-protocol-policy
  namespace: default
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  enableProxyProtocol: true
EOF
Verify that ClientTrafficPolicy is Accepted:
kubectl get clienttrafficpolicies.gateway.envoyproxy.io -n default
You should see the policy marked as accepted like this:
NAME STATUS AGE
enable-proxy-protocol-policy Accepted 5s
Try the endpoint without using PROXY protocol with curl:
curl -v --header "Host: www.example.com" http://$GATEWAY_HOST/get
* Trying 172.18.255.202:80...
* Connected to 172.18.255.202 (172.18.255.202) port 80 (#0)
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
Curl the example app through Envoy proxy once again, now sending the HAProxy PROXY protocol header at the beginning of the connection with the --haproxy-protocol flag:
curl --verbose --haproxy-protocol --header "Host: www.example.com" http://$GATEWAY_HOST/get
You should now expect a 200 response status and also see that the source IP was preserved in the X-Forwarded-For header.
* Trying 172.18.255.202:80...
* Connected to 172.18.255.202 (172.18.255.202) port 80 (#0)
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.1.2
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Mon, 04 Dec 2023 21:11:43 GMT
< content-length: 510
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/get",
"host": "www.example.com",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/8.1.2"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Envoy-Internal": [
"true"
],
"X-Forwarded-For": [
"192.168.255.6"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"290e4b61-44b7-4e5c-a39c-0ec76784e897"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-58d58f745-2zwvn"
* Connection #0 to host 172.18.255.202 left intact
}
Configure Client IP Detection
This example configures the number of additional ingress proxy hops from the right side of XFF HTTP headers to trust when determining the origin client's IP address, and determines whether or not x-forwarded-proto headers will be trusted. Refer to https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_conn_man/headers#x-forwarded-for for details.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: http-client-ip-detection
  namespace: default
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  clientIPDetection:
    xForwardedFor:
      numTrustedHops: 2
EOF
Verify that ClientTrafficPolicy is Accepted:
kubectl get clienttrafficpolicies.gateway.envoyproxy.io -n default
You should see the policy marked as accepted like this:
NAME STATUS AGE
http-client-ip-detection Accepted 5s
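The ENVOY_DEPLOYMENT variable below refers to the Envoy proxy deployment that Envoy Gateway created for the Gateway. One way to look it up, assuming the owning-gateway labels that Envoy Gateway applies to managed resources:
export ENVOY_DEPLOYMENT=$(kubectl get deploy -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')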
Open port-forward to the admin interface port:
kubectl port-forward deploy/${ENVOY_DEPLOYMENT} -n envoy-gateway-system 19000:19000
Curl the admin interface port to fetch the configured value for xff_num_trusted_hops
:
curl -s 'http://localhost:19000/config_dump?resource=dynamic_listeners' \
| jq -r '.configs[0].active_state.listener.default_filter_chain.filters[0].typed_config
| with_entries(select(.key | match("xff|remote_address|original_ip")))'
You should expect to see the following:
{
"use_remote_address": true,
"xff_num_trusted_hops": 2
}
Curl the example app through Envoy proxy:
curl -v http://$GATEWAY_HOST/get \
-H "Host: www.example.com" \
-H "X-Forwarded-Proto: https" \
-H "X-Forwarded-For: 1.1.1.1,2.2.2.2"
You should expect a 200 response status, see that X-Forwarded-Proto was preserved, and see that X-Envoy-External-Address was set to the leftmost address in the X-Forwarded-For header:
* Trying [::1]:8888...
* Connected to localhost (::1) port 8888
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
> X-Forwarded-Proto: https
> X-Forwarded-For: 1.1.1.1,2.2.2.2
>
Handling connection for 8888
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Tue, 30 Jan 2024 15:19:22 GMT
< content-length: 535
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/get",
"host": "www.example.com",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/8.4.0"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Envoy-External-Address": [
"1.1.1.1"
],
"X-Forwarded-For": [
"1.1.1.1,2.2.2.2,10.244.0.9"
],
"X-Forwarded-Proto": [
"https"
],
"X-Request-Id": [
"53ccfad7-1899-40fa-9322-ddb833aa1ac3"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-58d58f745-8psnc"
* Connection #0 to host localhost left intact
}
Enable HTTP Request Received Timeout
This feature allows you to limit the time taken by the Envoy Proxy fleet to receive the entire request from the client, which is useful in preventing certain clients from consuming too much memory in Envoy. This example configures the HTTP request timeout for the client; please check out the details here.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: client-timeout
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  timeout:
    http:
      requestReceivedTimeout: 2s
EOF
Curl the example app through Envoy proxy:
curl -v http://$GATEWAY_HOST/get \
-H "Host: www.example.com" \
-H "Content-Length: 10000"
You should expect a 408 response status after 2s:
curl -v http://$GATEWAY_HOST/get \
-H "Host: www.example.com" \
-H "Content-Length: 10000"
* Trying 172.18.255.200:80...
* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
> Content-Length: 10000
>
< HTTP/1.1 408 Request Timeout
< content-length: 15
< content-type: text/plain
< date: Tue, 27 Feb 2024 07:38:27 GMT
< connection: close
<
* Closing connection
request timeout
Configure Client HTTP Idle Timeout
The idle timeout is defined as the period in which there are no active requests. When the idle timeout is reached the connection will be closed. For more details see here.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: client-timeout
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  timeout:
    http:
      idleTimeout: 5s
EOF
Open a connection to the Gateway with openssl and leave it idle:
openssl s_client -crlf -connect $GATEWAY_HOST:443
You should expect the connection to be closed after 5s.
You can also check the number of connections closed due to the idle timeout by using the following query:
envoy_http_downstream_cx_idle_timeout{envoy_http_conn_manager_prefix="<name of connection manager>"}
The number of connections closed due to the idle timeout should increase by 1.
Configure Downstream Per Connection Buffer Limit
This feature allows you to set a soft limit on the size of the listener's new connection read and write buffers. The size is configured using the resource.Quantity format; see examples here.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: connection-buffer-limit
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  connection:
    bufferLimit: 1024
EOF
4 - Connection Limit
The connection limit feature allows users to limit the number of concurrently active TCP connections on a Gateway or a Listener. When the connection limit is reached, new connections are closed immediately by Envoy proxy. It's also possible to configure a delay for connection rejection, as sketched below.
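For example, here is a sketch of the connection settings with a rejection delay, assuming the closeDelay field of the ClientTrafficPolicy connection limit:
connection:
  connectionLimit:
    value: 5
    # Envoy waits this long before closing a rejected connection.
    closeDelay: 10s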
Users may want to limit the number of connections for several reasons:
- Protect resources like CPU and Memory.
- Ensure that different listeners can receive a fair share of global resources.
- Protect from malicious activity like DoS attacks.
Envoy Gateway introduces a new CRD called ClientTrafficPolicy that allows the user to describe their desired connection limit settings. This instantiated resource can be linked to a Gateway.
The Envoy connection limit implementation is distributed: counters are not synchronized between different Envoy proxies.
When a ClientTrafficPolicy is attached to a gateway, the connection limit will apply differently based on the Listener protocol in use:
- HTTP: all HTTP listeners in a Gateway will share a common connection counter, and a limit defined by the policy.
- HTTPS/TLS: each HTTPS/TLS listener will have a dedicated connection counter, and a limit defined by the policy.
Prerequisites
Install Envoy Gateway
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Install the hey load testing tool
- The hey CLI will be used to generate load and measure response times. Follow the installation instructions from the Hey project docs.
Test and customize connection limit settings
In this example, we use hey to open 10 connections and execute 1 RPS per connection for 10 seconds.
hey -c 10 -q 1 -z 10s -host "www.example.com" http://${GATEWAY_HOST}/get
Summary:
Total: 10.0058 secs
Slowest: 0.0275 secs
Fastest: 0.0029 secs
Average: 0.0111 secs
Requests/sec: 9.9942
[...]
Status code distribution:
[200] 100 responses
There are no connection limits, and so all 100 requests succeed.
Next, we apply a limit of 5 connections.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: connection-limit-ctp
  namespace: default
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  connection:
    connectionLimit:
      value: 5
EOF
Execute the load simulation again.
hey -c 10 -q 1 -z 10s -host "www.example.com" http://${GATEWAY_HOST}/get
Summary:
Total: 11.0327 secs
Slowest: 0.0361 secs
Fastest: 0.0013 secs
Average: 0.0088 secs
Requests/sec: 9.0640
[...]
Status code distribution:
[200] 50 responses
Error distribution:
[50] Get "http://localhost:8888/get": EOF
With the new connection limit, only 5 of 10 connections are established, and so only 50 requests succeed.
5 - Direct Response
Direct responses are valuable in cases where you want the gateway itself to handle certain requests without forwarding them to backend services. This task shows you how to configure them.
Installation
Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Testing Direct Response
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: direct-response
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /inline
    filters:
    - type: ExtensionRef
      extensionRef:
        group: gateway.envoyproxy.io
        kind: HTTPRouteFilter
        name: direct-response-inline
  - matches:
    - path:
        type: PathPrefix
        value: /value-ref
    filters:
    - type: ExtensionRef
      extensionRef:
        group: gateway.envoyproxy.io
        kind: HTTPRouteFilter
        name: direct-response-value-ref
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: value-ref-response
data:
  response.body: '{"error": "Internal Server Error"}'
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: HTTPRouteFilter
metadata:
  name: direct-response-inline
spec:
  directResponse:
    contentType: text/plain
    statusCode: 503
    body:
      type: Inline
      inline: "Oops! Your request is not found."
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: HTTPRouteFilter
metadata:
  name: direct-response-value-ref
spec:
  directResponse:
    contentType: application/json
    statusCode: 500
    body:
      type: ValueRef
      valueRef:
        group: ""
        kind: ConfigMap
        name: value-ref-response
EOF
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/inline
* Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80
> GET /inline HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 503 Service Unavailable
< content-type: text/plain
< content-length: 32
< date: Sat, 02 Nov 2024 00:35:48 GMT
<
* Connection #0 to host 127.0.0.1 left intact
Oops! Your request is not found.
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/value-ref
* Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80
> GET /value-ref HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< content-type: application/json
< content-length: 34
< date: Sat, 02 Nov 2024 00:35:55 GMT
<
* Connection #0 to host 127.0.0.1 left intact
{"error": "Internal Server Error"}
6 - Failover
Active-passive failover in an API gateway setup is like having a backup plan in place to keep things running smoothly if something goes wrong. Here’s why it’s valuable:
Staying Online: When the main (or “active”) backend has issues or goes offline, the fallback (or “passive”) backend is ready to step in instantly. This helps keep your API accessible and your services running, so users don’t even notice any interruptions.
Automatic Switch Over: If a problem occurs, the system can automatically switch traffic over to the fallback backend. This avoids needing someone to jump in and fix things manually, which could take time and might even lead to mistakes.
Lower Costs: In an active-passive setup, the fallback backend doesn't need to work all the time; it's just on standby. This can save on costs (like cloud egress costs) compared to setups where both backends are running at full capacity.
Peace of Mind with Redundancy: Although the fallback backend isn’t handling traffic daily, it’s there as a safety net. If something happens with the primary backend, the backup can take over immediately, ensuring your service doesn’t skip a beat.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Test
- We'll first create two services & deployments, called active and passive, representing an active and a passive backend application.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: active
  labels:
    app: active
    service: active
spec:
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  selector:
    app: active
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: active
spec:
  replicas: 1
  selector:
    matchLabels:
      app: active
      version: v1
  template:
    metadata:
      labels:
        app: active
        version: v1
    spec:
      containers:
      - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
        imagePullPolicy: IfNotPresent
        name: active
        ports:
        - containerPort: 3000
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
  name: passive
  labels:
    app: passive
    service: passive
spec:
  ports:
  - name: http
    port: 3000
    targetPort: 3000
  selector:
    app: passive
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: passive
spec:
  replicas: 1
  selector:
    matchLabels:
      app: passive
      version: v1
  template:
    metadata:
      labels:
        app: passive
        version: v1
    spec:
      containers:
      - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
        imagePullPolicy: IfNotPresent
        name: passive
        ports:
        - containerPort: 3000
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
EOF
Follow the instructions here to enable the Backend API.
Create two Backend resources that are used to represent the active backend and the passive backend. Note that we've set fallback: true for the passive backend to indicate it's a passive backend.
cat <<EOF | kubectl apply -f -
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: passive
spec:
  fallback: true
  endpoints:
  - fqdn:
      hostname: passive.default.svc.cluster.local
      port: 3000
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: Backend
metadata:
  name: active
spec:
  endpoints:
  - fqdn:
      hostname: active.default.svc.cluster.local
      port: 3000
EOF
- Let's create an HTTPRoute that can route to both of these backends.
cat <<EOF | kubectl apply -f -
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: ha-example
  namespace: default
spec:
  hostnames:
  - www.example.com
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
    namespace: default
  rules:
  - backendRefs:
    - group: gateway.envoyproxy.io
      kind: Backend
      name: active
      namespace: default
      port: 3000
    - group: gateway.envoyproxy.io
      kind: Backend
      name: passive
      namespace: default
      port: 3000
    matches:
    - path:
        type: PathPrefix
        value: /test
EOF
- Let's configure a BackendTrafficPolicy with a passive health check setting to detect transient errors.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: passive-health-check
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: ha-example
  healthCheck:
    passive:
      baseEjectionTime: 10s
      interval: 2s
      maxEjectionPercent: 100
      consecutive5XxErrors: 1
      consecutiveGatewayErrors: 0
      consecutiveLocalOriginFailures: 1
      splitExternalLocalOriginErrors: false
EOF
- Let's send 10 requests. You should see that they all go to the active backend.
for i in {1..10}; do curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/test 2>/dev/null | jq .pod; done
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
"active-5bb896774f-lz8s9"
- Let's simulate a failure in the active backend by changing the server listening port to 5000.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: active
spec:
  replicas: 1
  selector:
    matchLabels:
      app: active
      version: v1
  template:
    metadata:
      labels:
        app: active
        version: v1
    spec:
      containers:
      - image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
        imagePullPolicy: IfNotPresent
        name: active
        ports:
        - containerPort: 3000
        env:
        - name: HTTP_PORT
          value: "5000"
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
EOF
- Let's send 10 requests again. You should see them all being sent to the passive backend.
for i in {1..10}; do curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/test 2>/dev/null | jq .pod; done
parse error: Invalid numeric literal at line 1, column 9
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
"passive-7ddbf945c9-wkc4f"
The first error can be avoided by configuring retries, as sketched below.
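A minimal sketch of such a retry policy, assuming the BackendTrafficPolicy retry fields (the policy name is illustrative; see the Retry task for the full API):
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: retry-for-route
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: ha-example
  retry:
    numRetries: 3
    retryOn:
      # Retry when the connection to the upstream fails outright,
      # or when the response carries a retriable status code.
      triggers: ["connect-failure", "retriable-status-codes"]
      httpStatusCodes: [500]
    perRetry:
      backOff:
        baseInterval: 100ms
        maxInterval: 10s
EOF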
7 - Fault Injection
Envoy fault injection can be used to inject delays and abort requests to mimic failure scenarios such as service failures and overloads.
Envoy Gateway supports the following fault scenarios:
- delay fault: inject a custom fixed delay into the request with a certain probability to simulate delay failures.
- abort fault: inject a custom response code into the response with a certain probability to simulate abort failures.
Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired fault scenarios. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
For GRPC - follow the steps from the GRPC Routing example.
Install the hey load testing tool
- The hey CLI will be used to generate load and measure response times. Follow the installation instructions from the Hey project docs.
Configuration
Configure fault injection by creating a BackendTrafficPolicy and attaching it to the example HTTPRoute or GRPCRoute.
HTTPRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: fault-injection-50-percent-abort
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: foo
  faultInjection:
    abort:
      httpStatus: 501
      percentage: 50
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: fault-injection-delay
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: bar
  faultInjection:
    delay:
      fixedDelay: 2s
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: foo
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /foo
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bar
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: backend
      port: 3000
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /bar
EOF
Two HTTPRoute resources were created, one for /foo and another for /bar. The fault-injection-50-percent-abort BackendTrafficPolicy targets HTTPRoute foo to abort 50% of requests for /foo. The fault-injection-delay BackendTrafficPolicy targets HTTPRoute bar to delay requests for /bar by 2s.
Verify the HTTPRoute configuration and status:
kubectl get httproute/foo -o yaml
kubectl get httproute/bar -o yaml
Verify the BackendTrafficPolicy configuration:
kubectl get backendtrafficpolicy/fault-injection-50-percent-abort -o yaml
kubectl get backendtrafficpolicy/fault-injection-delay -o yaml
GRPCRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: fault-injection-abort
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: GRPCRoute
    name: yages
  faultInjection:
    abort:
      grpcStatus: 14
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: yages
  labels:
    example: grpc-routing
spec:
  parentRefs:
  - name: example-gateway
  hostnames:
  - "grpc-example.com"
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: yages
      port: 9000
      weight: 1
EOF
A BackendTrafficPolicy has been created that targets GRPCRoute yages to abort requests for the yages service.
Verify the GRPCRoute configuration and status:
kubectl get grpcroute/yages -o yaml
Verify the BackendTrafficPolicy configuration:
kubectl get backendtrafficpolicy/fault-injection-abort -o yaml
Testing
Ensure the GATEWAY_HOST environment variable from the Quickstart is set. If not, follow the Quickstart instructions to set the variable.
echo $GATEWAY_HOST
HTTPRoute
Verify that requests to the foo route are aborted.
hey -n 1000 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/foo
Status code distribution:
[200] 501 responses
[501] 499 responses
Verify that requests to the bar route are delayed.
hey -n 1000 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/bar
Summary:
Total: 20.1493 secs
Slowest: 2.1020 secs
Fastest: 1.9940 secs
Average: 2.0123 secs
Requests/sec: 49.6295
Total data: 557000 bytes
Size/request: 557 bytes
Response time histogram:
1.994 [1] |
2.005 [475] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
2.016 [419] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
2.026 [5] |
2.037 [0] |
2.048 [0] |
2.059 [30] |■■■
2.070 [0] |
2.080 [0] |
2.091 [11] |■
2.102 [59] |■■■■■
GRPCRoute
Verify that requests to the yages service are aborted.
grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
You should see the following response:
Error invoking method "yages.Echo/Ping": rpc error: code = Unavailable desc = failed to query for service descriptor "yages.Echo": fault filter abort
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.
Delete the BackendTrafficPolicy:
kubectl delete BackendTrafficPolicy/fault-injection-abort
8 - Gateway Address
The Gateway API provides an optional Addresses field through which Envoy Gateway can set addresses for the Envoy Proxy Service. Depending on the Service type, the addresses of the gateway can be used as:
- External IPs
- Cluster IP
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
External IPs
Using the addresses in Gateway.Spec.Addresses as the External IPs of the Envoy Proxy Service requires the address to be of type IPAddress and the ServiceType to be LoadBalancer or NodePort.
Envoy Gateway deploys the Envoy Proxy Service as LoadBalancer by default, so you can set the address of the Gateway directly (the address settings here are for reference only):
kubectl patch gateway eg --type=json --patch '
- op: add
  path: /spec/addresses
  value:
  - type: IPAddress
    value: 1.2.3.4
'
Verify the Gateway status:
kubectl get gateway
NAME CLASS ADDRESS PROGRAMMED AGE
eg eg 1.2.3.4 True 14m
Verify the Envoy Proxy Service status:
kubectl get service -n envoy-gateway-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
envoy-default-eg-64656661 LoadBalancer 10.96.236.219 1.2.3.4 80:31017/TCP 15m
envoy-gateway ClusterIP 10.96.192.76 <none> 18000/TCP 15m
envoy-gateway-metrics-service ClusterIP 10.96.124.73 <none> 8443/TCP 15m
Note: If Gateway.Spec.Addresses is explicitly set, these will be the only addresses that populate the Gateway status.
Cluster IP
Using the addresses in Gateway.Spec.Addresses as the Cluster IP of the Envoy Proxy Service requires the address to be of type IPAddress and the ServiceType to be ClusterIP.
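Since the Envoy Proxy Service is LoadBalancer by default, switching it to ClusterIP is done through the EnvoyProxy resource. A minimal sketch, assuming an EnvoyProxy resource referenced from the GatewayClass via parametersRef (the resource name is illustrative):
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  # Illustrative name; reference this from the GatewayClass parametersRef.
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyService:
        # Deploy the Envoy Proxy Service as ClusterIP instead of LoadBalancer.
        type: ClusterIP
EOF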
9 - Gateway API Support
As mentioned in the system design document, Envoy Gateway’s managed data plane is configured dynamically through Kubernetes resources, primarily Gateway API objects. Envoy Gateway supports configuration using the following Gateway API resources.
GatewayClass
A GatewayClass represents a “class” of gateways, i.e. which Gateways should be managed by Envoy Gateway.
Envoy Gateway supports managing a single GatewayClass resource that matches its configured controllerName and follows Gateway API guidelines for resolving conflicts when multiple GatewayClasses exist with a matching controllerName.
Note: If specifying GatewayClass parameters reference, it must refer to an EnvoyProxy resource.
Gateway
When a Gateway resource is created that references the managed GatewayClass, Envoy Gateway will create and manage a new Envoy Proxy deployment. Gateway API resources that reference this Gateway will configure this managed Envoy Proxy deployment.
HTTPRoute
A HTTPRoute configures routing of HTTP traffic through one or more Gateways. The following HTTPRoute filters are supported by Envoy Gateway:
- requestHeaderModifier: RequestHeaderModifiers can be used to modify or add request headers before the request is proxied to its destination.
- responseHeaderModifier: ResponseHeaderModifiers can be used to modify or add response headers before the response is sent back to the client.
- requestMirror: RequestMirrors configure destinations where the requests should also be mirrored to. Responses to mirrored requests will be ignored.
- requestRedirect: RequestRedirects configure policies for how requests that match the HTTPRoute should be modified and then redirected.
- urlRewrite: UrlRewrites allow for modification of the request's hostname and path before it is proxied to its destination.
- extensionRef: ExtensionRefs are used by Envoy Gateway to implement extended filters. Currently, Envoy Gateway supports rate limiting and request authentication filters. For more information about these filters, refer to the rate limiting and request authentication documentation.
Notes:
- The only BackendRef kind supported by Envoy Gateway is a Service. Routing traffic to other destinations such as arbitrary URLs is not possible.
- Only requestHeaderModifier and responseHeaderModifier filters are currently supported within HTTPBackendRef.
TCPRoute
A TCPRoute configures routing of raw TCP traffic through one or more Gateways. Traffic can be forwarded to the desired BackendRefs based on a TCP port number.
Note: A TCPRoute only supports proxying in non-transparent mode, i.e. the backend will see the source IP and port of the Envoy Proxy instance instead of the client.
UDPRoute
A UDPRoute configures routing of raw UDP traffic through one or more Gateways. Traffic can be forwarded to the desired BackendRefs based on a UDP port number.
Note: Similar to TCPRoutes, UDPRoutes only support proxying in non-transparent mode i.e. the backend will see the source IP and port of the Envoy Proxy instance instead of the client.
GRPCRoute
A GRPCRoute configures routing of gRPC requests through one or more Gateways. They offer request matching by hostname, gRPC service, gRPC method, or HTTP/2 Header. Envoy Gateway supports the following filters on GRPCRoutes to provide additional traffic processing:
- requestHeaderModifier: RequestHeaderModifiers can be used to modify or add request headers before the request is proxied to its destination.
- responseHeaderModifier: ResponseHeaderModifiers can be used to modify or add response headers before the response is sent back to the client.
- requestMirror: RequestMirrors configure destinations where the requests should also be mirrored to. Responses to mirrored requests will be ignored.
Notes:
- The only BackendRef kind supported by Envoy Gateway is a Service. Routing traffic to other destinations such as arbitrary URLs is not currently possible.
- Only requestHeaderModifier and responseHeaderModifier filters are currently supported within GRPCBackendRef.
TLSRoute
A TLSRoute configures routing of TCP traffic through one or more Gateways. However, unlike TCPRoutes, TLSRoutes can match against TLS-specific metadata.
ReferenceGrant
A ReferenceGrant is used to allow a resource to reference another resource in a different namespace. Normally an HTTPRoute created in namespace foo is not allowed to reference a Service in namespace bar. A ReferenceGrant permits these types of cross-namespace references. Envoy Gateway supports the following ReferenceGrant use-cases:
- Allowing an HTTPRoute, GRPCRoute, TLSRoute, UDPRoute, or TCPRoute to reference a Service in a different namespace.
- Allowing an HTTPRoute's requestMirror filter to include a BackendRef that references a Service in a different namespace.
- Allowing a Gateway's SecretObjectReference to reference a secret in a different namespace.
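For instance, here is a sketch of a ReferenceGrant matching the foo/bar scenario above, allowing HTTPRoutes in namespace foo to reference Services in namespace bar (the grant name is illustrative):
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  # The grant lives in the target namespace (bar).
  name: allow-httproutes-from-foo
  namespace: bar
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: foo
  to:
  - group: ""
    kind: Service
EOF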
10 - Global Rate Limit
Rate limit is a feature that allows the user to limit the number of incoming requests to a predefined value based on attributes within the traffic flow.
Here are some reasons why you may want to implement rate limits:
- To prevent malicious activity such as DDoS attacks.
- To prevent applications and their resources (such as a database) from getting overloaded.
- To create API limits based on user entitlements.
Envoy Gateway supports two types of rate limiting: Global rate limiting and Local rate limiting.
Global rate limiting applies a shared rate limit to the traffic flowing through all the instances of Envoy proxies where it is configured. i.e. if the data plane has 2 replicas of Envoy running, and the rate limit is 10 requests/second, this limit is shared and will be hit if 5 requests pass through the first replica and 5 requests pass through the second replica within the same second.
Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their rate limit intent. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Note: Limit is applied per route. Even if a BackendTrafficPolicy targets a gateway, each route in that gateway still has a separate rate limit bucket. For example, if a gateway has 2 routes, and the limit is 100r/s, then each route has its own 100r/s rate limit bucket.
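For instance, here is a sketch of a gateway-wide limit (the policy name and numbers are illustrative); with 2 routes attached to the gateway, each route would get its own 100 requests/second bucket:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: policy-gateway
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  rateLimit:
    type: Global
    global:
      rules:
      # No clientSelectors: the limit applies to all requests on each route.
      - limit:
          requests: 100
          unit: Second
EOF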
Prerequisites
Install Envoy Gateway
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Install Redis
- The global rate limit feature is based on Envoy Ratelimit, which requires a Redis instance as its caching layer. Let's install a Redis deployment in the redis-system namespace.
cat <<EOF | kubectl apply -f -
kind: Namespace
apiVersion: v1
metadata:
  name: redis-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: redis-system
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - image: redis:6.0.6
        imagePullPolicy: IfNotPresent
        name: redis
        resources:
          limits:
            cpu: 1500m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: redis-system
  labels:
    app: redis
spec:
  ports:
  - name: redis
    port: 6379
    protocol: TCP
    targetPort: 6379
  selector:
    app: redis
EOF
Enable Global Rate limit in Envoy Gateway
- The default installation of Envoy Gateway installs a default EnvoyGateway configuration and attaches it using a ConfigMap. In the next step, we will update this resource to enable rate limit in Envoy Gateway, as well as configure the URL of the Redis instance used for global rate limiting.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-gateway-config
  namespace: envoy-gateway-system
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    provider:
      type: Kubernetes
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    rateLimit:
      backend:
        type: Redis
        redis:
          url: redis.redis-system.svc.cluster.local:6379
EOF
After updating the ConfigMap, you will need to wait for the configuration to take effect. You can force the configuration to be reloaded by restarting the envoy-gateway deployment:
kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
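You can also verify that the rate limit service has been deployed (a quick check; the Deployment name envoy-ratelimit is what the default installation creates, so adjust it if your setup differs):
kubectl get deployment envoy-ratelimit -n envoy-gateway-system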
Rate Limit Specific User
Here is an example of a rate limit implemented by the application developer to limit a specific user by matching on a custom x-user-id header with a value set to one.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- name: x-user-id
value: one
limit:
requests: 3
unit: Hour
EOF
HTTPRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-ratelimit -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Let's query ratelimit.example/get 4 times. We should receive a 200 response from the example Gateway for the first 3 requests, and then a 429 status code for the 4th request, since the limit is set at 3 requests/hour for requests that carry the header x-user-id with value one.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
You should be able to send requests with the x-user-id header and a different value and receive successful responses from the server.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: two" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:36 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:37 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:38 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:39 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
Rate Limit Distinct Users Except Admin
Here is an example of a rate limit implemented by the application developer to limit distinct users who can be differentiated based on the value in the x-user-id header. Here, user one (recognised from the traffic flow using the header x-user-id and value one) will be rate limited at 3 requests/hour, and so will user two (recognised from the traffic flow using the header x-user-id and value two). But if x-user-id is admin, it will not be rate limited even beyond 3 requests/hour.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- type: Distinct
name: x-user-id
- name: x-user-id
value: admin
invert: true
limit:
requests: 3
unit: Hour
EOF
HTTPRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
Let's run the same command again with the header x-user-id and value one set in the request. We should see the first 3 requests succeed and the 4th request get rate limited.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
You should see the same behavior when the value for the header x-user-id is set to two and 4 requests are sent.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: two" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
But when the value for the header x-user-id is set to admin and 4 requests are sent, all 4 of them should respond with 200 OK.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: admin" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
Rate Limit All Requests
This example shows you how to rate limit all requests matching the HTTPRoute rule at 3 requests/hour by leaving the clientSelectors field unset.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
rateLimit:
type: Global
global:
rules:
- limit:
requests: 3
unit: Hour
EOF
HTTPRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
for i in {1..4}; do curl -I --header "Host: ratelimit.example" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
Rate Limit Client IP Addresses
Here is an example of a rate limit implemented by the application developer to limit distinct users who can be differentiated based on their IP address (also reflected in the X-Forwarded-For header).
Note: Envoy Gateway supports two kinds of rate limiting by IP address: Exact and Distinct.
- Exact means that all IP addresses within the specified Source IP CIDR share the same rate limit bucket.
- Distinct means that each IP address within the specified Source IP CIDR has its own rate limit bucket.
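For contrast, here is a minimal sketch of the same rule using an Exact selector, under which all clients in the CIDR share a single bucket of 3 requests/hour (only the rules section differs from the full example below; since Exact is the default, the type field could also be omitted):
rules:
- clientSelectors:
  - sourceCIDR:
      value: 0.0.0.0/0
      type: Exact  # all source IPs in the CIDR share one bucket
  limit:
    requests: 3
    unit: Hour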
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- sourceCIDR:
value: 0.0.0.0/0
type: Distinct
limit:
requests: 3
unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
for i in {1..4}; do curl -I --header "Host: ratelimit.example" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:45 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:46 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:48 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Tue, 28 Mar 2023 08:28:48 GMT
server: envoy
transfer-encoding: chunked
Rate Limit JWT Claims
Here is an example of a rate limit implemented by the application developer to limit distinct users who can be differentiated based on the value of the JWT claims they carry.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: jwt-example
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
jwt:
providers:
- name: example
remoteJWKS:
uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
claimToHeaders:
- claim: name
header: x-claim-name
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- name: x-claim-name
value: John Doe
limit:
requests: 3
unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: example
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /foo
EOF
Get the JWT used for testing request authentication:
TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode
TOKEN1=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/with-different-claim.jwt -s) && echo "$TOKEN1" | cut -d '.' -f2 - | base64 --decode
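The BackendTrafficPolicy above matches on the name claim value John Doe, which the SecurityPolicy copies into the x-claim-name header. If you have jq installed (an optional convenience, not required by this task), you can check which token carries that claim:
echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode | jq .name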
Requests carrying TOKEN are rate limited:
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "Authorization: Bearer $TOKEN" http://${GATEWAY_HOST}/foo ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:25 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:26 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:27 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Mon, 12 Jun 2023 12:00:28 GMT
server: envoy
transfer-encoding: chunked
Requests carrying TOKEN1 are not rate limited:
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "Authorization: Bearer $TOKEN1" http://${GATEWAY_HOST}/foo ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:34 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:35 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:36 GMT
content-length: 556
x-envoy-upstream-service-time: 1
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:37 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy
(Optional) Editing Kubernetes Resource Settings for the Rate Limit Service
The default installation of Envoy Gateway installs a default EnvoyGateway configuration and provides initial Kubernetes resource settings for the rate limit service: replicas is 1, the CPU request is 100m, and the memory request is 512Mi. Other settings, such as the container image, securityContext, and env, as well as the pod annotations and securityContext, can be modified by editing the ConfigMap. tls.certificateRef sets the client certificate for TLS connections to the Redis server.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: envoy-gateway-config
namespace: envoy-gateway-system
data:
envoy-gateway.yaml: |
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
provider:
type: Kubernetes
kubernetes:
rateLimitDeployment:
replicas: 1
container:
image: envoyproxy/ratelimit:master
env:
- name: CACHE_KEY_PREFIX
value: "eg:rl:"
resources:
requests:
cpu: 100m
memory: 512Mi
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
pod:
annotations:
key1: val1
key2: val2
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
fsGroupChangePolicy: "OnRootMismatch"
gateway:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
rateLimit:
backend:
type: Redis
redis:
url: redis.redis-system.svc.cluster.local:6379
tls:
certificateRef:
name: ratelimit-cert
EOF
After updating the ConfigMap, you will need to wait for the configuration to take effect. You can force the configuration to be reloaded by restarting the envoy-gateway deployment:
kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
11 - GRPC Routing
The GRPCRoute resource allows users to configure gRPC routing by matching HTTP/2 traffic and forwarding it to backend gRPC servers. To learn more about gRPC routing, refer to the Gateway API documentation.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Installation
Install the gRPC routing example resources:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/grpc-routing.yaml
The manifest installs a GatewayClass, Gateway, a Deployment, a Service, and a GRPCRoute resource. The GatewayClass is a cluster-scoped resource that represents a class of Gateways that can be instantiated.
Note: Envoy Gateway is configured by default to manage a GatewayClass with controllerName: gateway.envoyproxy.io/gatewayclass-controller.
Verification
Check the status of the GatewayClass:
kubectl get gc --selector=example=grpc-routing
The status should reflect “Accepted=True”, indicating Envoy Gateway is managing the GatewayClass.
A Gateway represents configuration of infrastructure. When a Gateway is created, Envoy proxy infrastructure is
provisioned or configured by Envoy Gateway. The gatewayClassName
defines the name of a GatewayClass used by this
Gateway. Check the status of the Gateway:
kubectl get gateways --selector=example=grpc-routing
The status should reflect “Ready=True”, indicating the Envoy proxy infrastructure has been provisioned. The status also provides the address of the Gateway. This address is used later to test connectivity to proxied backend services.
Check the status of the GRPCRoute:
kubectl get grpcroutes --selector=example=grpc-routing -o yaml
The status for the GRPCRoute should surface “Accepted=True” and a parentRef
that references the example Gateway.
The yages GRPCRoute matches any traffic for “grpc-example.com” and forwards it to the “yages” Service.
Testing the Configuration
Before testing GRPC routing to the yages
backend, get the Gateway’s address.
export GATEWAY_HOST=$(kubectl get gateway/example-gateway -o jsonpath='{.status.addresses[0].value}')
Test GRPC routing to the yages
backend using the grpcurl command.
grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
You should see the following response:
{
"text": "pong"
}
Envoy Gateway also supports gRPC-Web requests for this configuration. The below curl command can be used to send a gRPC-Web request over HTTP/2. You should receive the same response seen in the previous command.
The data in the body AAAAAAA= is a base64 encoded representation of an empty message (data length 0) that the Ping RPC accepts.
curl --http2-prior-knowledge -s ${GATEWAY_HOST}:80/yages.Echo/Ping -H 'Host: grpc-example.com' -H 'Content-Type: application/grpc-web-text' -H 'Accept: application/grpc-web-text' -XPOST -d'AAAAAAA=' | base64 -d
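To see why AAAAAAA= represents an empty message, you can decode it yourself: it is five zero bytes, a 1-byte compression flag (0, uncompressed) followed by a 4-byte big-endian message length (0):
echo -n 'AAAAAAA=' | base64 -d | od -An -tx1
 00 00 00 00 00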
GRPCRoute Match
The matches field can be used to restrict the route to a specific set of requests based on gRPC’s service and/or method names. It supports two match types: Exact and RegularExpression.
Exact
Exact match is the default match type. The following example shows how to match a request based on the service and method names for grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo, as well as a match for all services with a method name Ping, which matches yages.Echo/Ping in our deployment.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
name: yages
labels:
example: grpc-routing
spec:
parentRefs:
- name: example-gateway
hostnames:
- "grpc-example.com"
rules:
- matches:
- method:
method: ServerReflectionInfo
service: grpc.reflection.v1alpha.ServerReflection
- method:
method: Ping
backendRefs:
- group: ""
kind: Service
name: yages
port: 9000
weight: 1
EOF
Verify the GRPCRoute status:
kubectl get grpcroutes --selector=example=grpc-routing -o yaml
Test GRPC routing to the yages
backend using the grpcurl command.
grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
RegularExpression
The following example shows how to match a request based on the service and method names with match type RegularExpression. It matches all the services and methods with pattern /.*.Echo/Pin.+, which matches yages.Echo/Ping in our deployment.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
name: yages
labels:
example: grpc-routing
spec:
parentRefs:
- name: example-gateway
hostnames:
- "grpc-example.com"
rules:
- matches:
- method:
method: ServerReflectionInfo
service: grpc.reflection.v1alpha.ServerReflection
- method:
method: "Pin.+"
service: ".*.Echo"
type: RegularExpression
backendRefs:
- group: ""
kind: Service
name: yages
port: 9000
weight: 1
EOF
Verify the GRPCRoute status:
kubectl get grpcroutes --selector=example=grpc-routing -o yaml
Test GRPC routing to the yages
backend using the grpcurl command.
grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
12 - HTTP Redirects
The HTTPRoute resource can issue redirects to clients or rewrite paths sent upstream using filters. Note that HTTPRoute rules cannot use both filter types at once. At the time of this writing, Envoy Gateway only supports the core HTTPRoute filters, which consist of RequestRedirect and RequestHeaderModifier. To learn more about HTTP routing, refer to the Gateway API documentation.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Redirects
Redirects return HTTP 3XX responses to a client, instructing it to retrieve a different resource. A RequestRedirect filter instructs Gateways to emit a redirect response to requests that match the rule. For example, to issue a permanent redirect (301) from HTTP to HTTPS, configure requestRedirect.statusCode=301 and requestRedirect.scheme="https":
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-to-https-filter-redirect
spec:
parentRefs:
- name: eg
hostnames:
- redirect.example
rules:
- filters:
- type: RequestRedirect
requestRedirect:
scheme: https
statusCode: 301
hostname: www.example.com
EOF
Note: 301 (default) and 302 are the only supported statusCodes.
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-to-https-filter-redirect -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying redirect.example/get should result in a 301 response from the example Gateway, redirecting to the configured hostname.
$ curl -L -vvv --header "Host: redirect.example" "http://${GATEWAY_HOST}/get"
...
< HTTP/1.1 301 Moved Permanently
< location: https://www.example.com/get
...
If you followed the steps in the Secure Gateways task, you should be able to curl the redirect location.
HTTP -> HTTPS
Listeners expose TLS settings on a per-domain or per-subdomain basis. The TLS settings of a listener are applied to all domains that satisfy the hostname criteria.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/CN=example.com' -keyout CA.key -out CA.crt
openssl req -out example.com.csr -newkey rsa:2048 -nodes -keyout tls.key -subj "/CN=example.com"
Generate a self-signed wildcard certificate for example.com with the *.example.com extension:
cat <<EOF | openssl x509 -req -days 365 -CA CA.crt -CAkey CA.key -set_serial 0 \
-subj "/CN=example.com" \
-in example.com.csr -out tls.crt -extensions v3_req -extfile -
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = example.com
DNS.2 = *.example.com
EOF
Create the Kubernetes TLS Secret:
kubectl create secret tls example-com --key=tls.key --cert=tls.crt
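Optionally, confirm the Secret was created with the expected type:
kubectl get secret example-com -o jsonpath='{.type}'
This should print kubernetes.io/tls.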
Define an HTTPS listener on the existing Gateway:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
spec:
gatewayClassName: eg
listeners:
- name: http
port: 80
protocol: HTTP
# hostname: "*.example.com"
- name: https
port: 443
protocol: HTTPS
# hostname: "*.example.com"
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: example-com
EOF
Check for any TLS certificate issues on the gateway.
kubectl -n default describe gateway eg
Create two HTTPRoutes and attach them to the HTTP and HTTPS listeners using the sectionName field.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: tls-redirect
spec:
parentRefs:
- name: eg
sectionName: http
hostnames:
# - "*.example.com" # catch all hostnames
- "www.example.com"
rules:
- filters:
- type: RequestRedirect
requestRedirect:
scheme: https
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
spec:
parentRefs:
- name: eg
sectionName: https
hostnames:
- "www.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
EOF
Curl the example app through the HTTP listener:
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/get
Curl the example app through the HTTPS listener:
curl -v -H 'Host:www.example.com' --resolve "www.example.com:443:$GATEWAY_HOST" \
--cacert CA.crt https://www.example.com:443/get
Path Redirects
Path redirects use an HTTP Path Modifier to replace either entire paths or path prefixes. For example, the HTTPRoute below will issue a 302 redirect for all path.redirect.example requests whose path begins with /get, sending them to /status/200.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-filter-path-redirect
spec:
parentRefs:
- name: eg
hostnames:
- path.redirect.example
rules:
- matches:
- path:
type: PathPrefix
value: /get
filters:
- type: RequestRedirect
requestRedirect:
path:
type: ReplaceFullPath
replaceFullPath: /status/200
statusCode: 302
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-filter-path-redirect -o yaml
Querying path.redirect.example should result in a 302 response from the example Gateway and a redirect location containing the configured redirect path.
Query the path.redirect.example host:
curl -vvv --header "Host: path.redirect.example" "http://${GATEWAY_HOST}/get"
You should receive a 302 with a redirect location of http://path.redirect.example/status/200.
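The relevant part of the response should look roughly like this (a sketch; surrounding output trimmed):
< HTTP/1.1 302 Found
< location: http://path.redirect.example/status/200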
13 - HTTP Request Headers
The HTTPRoute resource can modify the headers of a request before forwarding it to the upstream service. HTTPRoute rules cannot use both filter types at once. At the time of this writing, Envoy Gateway only supports the core HTTPRoute filters, which consist of RequestRedirect and RequestHeaderModifier. To learn more about HTTP routing, refer to the Gateway API documentation.
A RequestHeaderModifier
filter instructs Gateways to modify the headers in requests that match the rule
before forwarding the request upstream. Note that the RequestHeaderModifier
filter will only modify headers before the
request is sent from Envoy to the upstream service and will not affect response headers returned to the downstream
client.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Adding Request Headers
The RequestHeaderModifier
filter can add new headers to a request before it is sent to the upstream. If the request
does not have the header configured by the filter, then that header will be added to the request. If the request already
has the header configured by the filter, then the value of the header in the filter will be appended to the value of the
header in the request.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
filters:
- type: RequestHeaderModifier
requestHeaderModifier:
add:
- name: "add-header"
value: "foo"
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-headers -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the upstream example app received the header add-header with the value something,foo:
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "add-header: something"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
"headers": {
"Accept": [
"*/*"
],
"Add-Header": [
"something",
"foo"
],
...
Setting Request Headers
Setting headers is similar to adding headers. If the request does not have the header configured by the filter, it will be added. But unlike adding, which appends to the value of a header the request already contains, setting a header causes its value to be replaced by the value configured in the filter.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
filters:
- type: RequestHeaderModifier
requestHeaderModifier:
set:
- name: "set-header"
value: "foo"
EOF
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the upstream example app received the header set-header with the original value something replaced by foo.
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "set-header: something"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> set-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
"headers": {
"Accept": [
"*/*"
],
"Set-Header": [
"foo"
],
...
Removing Request Headers
Headers can be removed from a request by simply supplying a list of header names.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
name: backend
port: 3000
weight: 1
filters:
- type: RequestHeaderModifier
requestHeaderModifier:
remove:
- "remove-header"
EOF
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the upstream example app received the header add-header, but the header remove-header that was sent by curl was removed before the upstream received the request.
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "add-header: something" --header "remove-header: foo"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
> remove-header: foo
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
"headers": {
"Accept": [
"*/*"
],
"Add-Header": [
"something"
],
...
Combining Filters
Headers can be added, set, and removed in a single filter on the same HTTPRoute, and they will all perform as expected.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
filters:
- type: RequestHeaderModifier
requestHeaderModifier:
add:
- name: "add-header-1"
value: "foo"
set:
- name: "set-header-1"
value: "bar"
remove:
- "removed-header"
EOF
Early Header Modification
In some cases, it could be necessary to modify headers before the proxy performs any sort of processing, routing or tracing. Envoy Gateway supports this functionality using the ClientTrafficPolicy API.
A ClientTrafficPolicy resource can be attached to a Gateway resource to configure early header modifications for all its routes. In the following example we will demonstrate how early header modification can be configured.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
filters:
- type: RequestHeaderModifier
requestHeaderModifier:
add:
- name: early-added-header
value: late
- name: early-set-header
value: late
- name: early-removed-header
value: late
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
name: enable-early-headers
namespace: default
spec:
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: eg
headers:
earlyRequestHeaders:
add:
- name: "early-added-header"
value: "early"
set:
- name: "early-set-header"
value: "early"
remove:
- "early-removed-header"
EOF
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the upstream example app received the following headers:
- early-added-header contains the client, early (ClientTrafficPolicy), and late (RouteFilter) values
- early-set-header contains only the early (ClientTrafficPolicy) and late (RouteFilter) values, since the early modification overwrote the client value
- early-removed-header contains only the late (RouteFilter) value, since the early modification deleted the client value
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "early-added-header: client" --header "early-set-header: client" --header "early-removed-header: client"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> early-added-header: client
> early-set-header: client
> early-removed-header: client
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
"headers": {
"Accept": [
"*/*"
],
"Early-Added-Header": [
"client",
"early",
"late"
],
"Early-Set-Header": [
"early",
"late"
],
"Early-removed-Header": [
"late"
]
...
14 - HTTP Response Headers
The HTTPRoute resource can modify the headers of a response before returning it to the downstream client. To learn more about HTTP routing, refer to the Gateway API documentation.
A ResponseHeaderModifier filter instructs Gateways to modify the headers in responses that match the rule before responding to the downstream. Note that the ResponseHeaderModifier filter will only modify headers before the response is returned from Envoy to the downstream client and will not affect request headers forwarded to the upstream service.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Adding Response Headers
The ResponseHeaderModifier filter can add new headers to a response before it is returned to the downstream client. If the response does not have the header configured by the filter, then that header will be added to the response. If the response already has the header configured by the filter, then the value of the header in the filter will be appended to the value of the header in the response.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
add:
- name: "add-header"
value: "foo"
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-headers -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the downstream client received the header add-header with the value foo:
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: X-Foo: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: X-Foo: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
< x-foo: value1
< add-header: foo
<
...
"headers": {
"Accept": [
"*/*"
],
"X-Echo-Set-Header": [
"X-Foo: value1"
]
...
Setting Response Headers
Setting headers is similar to adding headers. If the response does not have the header configured by the filter, it will be added. But unlike adding, which appends to the value of a header the response already contains, setting a header causes its value to be replaced by the value configured in the filter.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
set:
- name: "set-header"
value: "foo"
EOF
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the downstream client received the header set-header with the original value value1 replaced by foo.
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: set-header: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: set-header: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
< set-header: foo
<
"headers": {
"Accept": [
"*/*"
],
"X-Echo-Set-Header": [
"set-header": value1"
]
...
Removing Response Headers
Headers can be removed from a response by simply supplying a list of header names.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
name: backend
port: 3000
weight: 1
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
remove:
- "remove-header"
EOF
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the header remove-header set on the response was removed before the downstream client received it.
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: remove-header: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: remove-header: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
"headers": {
"Accept": [
"*/*"
],
"X-Echo-Set-Header": [
"remove-header": value1"
]
...
Combining Filters
Headers can be added, set, and removed in a single filter on the same HTTPRoute, and they will all perform as expected.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
add:
- name: "add-header-1"
value: "foo"
set:
- name: "set-header-1"
value: "bar"
remove:
- "removed-header"
EOF
15 - HTTP Routing
The HTTPRoute resource allows users to configure HTTP routing by matching HTTP traffic and forwarding it to Kubernetes backends. Currently, the only backend supported by Envoy Gateway is a Service resource. This task shows how to route traffic based on host, header, and path fields and forward the traffic to different Kubernetes Services. To learn more about HTTP routing, refer to the Gateway API documentation.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Installation
Install the HTTP routing example resources:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/http-routing.yaml
The manifest installs a GatewayClass, Gateway, four Deployments, four Services, and three HTTPRoute resources. The GatewayClass is a cluster-scoped resource that represents a class of Gateways that can be instantiated.
Note: Envoy Gateway is configured by default to manage a GatewayClass with controllerName: gateway.envoyproxy.io/gatewayclass-controller.
Verification
Check the status of the GatewayClass:
kubectl get gc --selector=example=http-routing
The status should reflect “Accepted=True”, indicating Envoy Gateway is managing the GatewayClass.
A Gateway represents configuration of infrastructure. When a Gateway is created, Envoy proxy infrastructure is
provisioned or configured by Envoy Gateway. The gatewayClassName
defines the name of a GatewayClass used by this
Gateway. Check the status of the Gateway:
kubectl get gateways --selector=example=http-routing
The status should reflect “Ready=True”, indicating the Envoy proxy infrastructure has been provisioned. The status also provides the address of the Gateway. This address is used later to test connectivity to proxied backend services.
The three HTTPRoute resources create routing rules on the Gateway. In order to receive traffic from a Gateway,
an HTTPRoute must be configured with parentRefs
which reference the parent Gateway(s) that it should be attached to.
An HTTPRoute can match against a single set of hostnames. These hostnames are matched before any other matching within the HTTPRoute takes place. Since example.com, foo.example.com, and bar.example.com are separate hosts with different routing requirements, each is deployed as its own HTTPRoute: example-route, foo-route, and bar-route.
Check the status of the HTTPRoutes:
kubectl get httproutes --selector=example=http-routing -o yaml
The status for each HTTPRoute should surface “Accepted=True” and a parentRef
that references the example Gateway.
The example-route
matches any traffic for “example.com” and forwards it to the “example-svc” Service.
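For reference, a minimal sketch of what such a route looks like (the hostname, backend, and gateway names follow the description above; the port is an assumption, so consult the installed http-routing.yaml for the exact manifest):
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-route
spec:
  parentRefs:
  - name: example-gateway
  hostnames:
  - "example.com"
  rules:
  - backendRefs:
    - name: example-svc
      port: 8080 # assumed port; check the example manifest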
Testing the Configuration
Before testing HTTP routing to the example-svc
backend, get the Gateway’s address.
export GATEWAY_HOST=$(kubectl get gateway/example-gateway -o jsonpath='{.status.addresses[0].value}')
Test HTTP routing to the example-svc
backend.
curl -vvv --header "Host: example.com" "http://${GATEWAY_HOST}/"
A 200
status code should be returned and the body should include "pod": "example-backend-*"
indicating the traffic
was routed to the example backend service. If you change the hostname to a hostname not represented in any of the
HTTPRoutes, e.g. “www.example.com”, the HTTP traffic will not be routed and a 404
should be returned.
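For example, a request with an unlisted hostname (a quick sketch of the negative test):
curl -vvv --header "Host: www.example.com" "http://${GATEWAY_HOST}/"
The response should be a 404, since no HTTPRoute matches the www.example.com host.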
The foo-route
matches any traffic for foo.example.com
and applies its routing rules to forward the traffic to the
“foo-svc” Service. Since there is only one path prefix match for /login
, only foo.example.com/login/*
traffic will
be forwarded. Test HTTP routing to the foo-svc
backend.
curl -vvv --header "Host: foo.example.com" "http://${GATEWAY_HOST}/login"
A 200
status code should be returned and the body should include "pod": "foo-backend-*"
indicating the traffic
was routed to the foo backend service. Traffic to any other paths that do not begin with /login
will not be matched by
this HTTPRoute. Test this by removing /login
from the request.
curl -vvv --header "Host: foo.example.com" "http://${GATEWAY_HOST}/"
The HTTP traffic will not be routed and a 404
should be returned.
Similarly, the bar-route
HTTPRoute matches traffic for bar.example.com
. All traffic for this hostname will be
evaluated against the routing rules. The most specific match will take precedence which means that any traffic with the
env:canary
header will be forwarded to bar-svc-canary
and if the header is missing or not canary
then it’ll be
forwarded to bar-svc
. Test HTTP routing to the bar-svc
backend.
curl -vvv --header "Host: bar.example.com" "http://${GATEWAY_HOST}/"
A 200
status code should be returned and the body should include "pod": "bar-backend-*"
indicating the traffic
was routed to the bar backend service.
Test HTTP routing to the bar-svc-canary
backend by adding the env: canary
header to the request.
curl -vvv --header "Host: bar.example.com" --header "env: canary" "http://${GATEWAY_HOST}/"
A 200
status code should be returned and the body should include "pod": "bar-canary-backend-*"
indicating the
traffic was routed to the bar canary backend service.
JWT Claims Based Routing
Users can route to a specific backend by matching on JWT claims. This can be achieved by defining a SecurityPolicy with a jwt configuration that does the following:
- Converts jwt claims to headers, which can be used for header based routing
- Sets the recomputeRoute field to true. This is required so that the incoming request first matches a fallback/catch-all route where the JWT can be authenticated, the claims from the JWT can be converted to headers, and the route match can then be recomputed based on the updated headers.
For this feature to work, please make sure that:
- A fallback route rule is defined; the backend for this route rule can be invalid.
- The SecurityPolicy is applied to both the fallback route and the route with the claim header matches, to avoid spoofing.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: jwt-example
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: jwt-claim-routing
jwt:
providers:
- name: example
recomputeRoute: true
claimToHeaders:
- claim: sub
header: x-sub
- claim: admin
header: x-admin
- claim: name
header: x-name
remoteJWKS:
uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: jwt-claim-routing
spec:
parentRefs:
- name: eg
rules:
- backendRefs:
- kind: Service
name: foo-svc
port: 8080
weight: 1
matches:
- headers:
- name: x-name
value: John Doe
- backendRefs:
- kind: Service
name: bar-svc
port: 8080
weight: 1
matches:
- headers:
- name: x-name
value: Tom
# catch all
- backendRefs:
- kind: Service
name: infra-backend-invalid
port: 8080
weight: 1
matches:
- path:
type: PathPrefix
value: /
EOF
Get the JWT used for testing request authentication:
TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode
Test routing to the foo-svc backend by specifying a JWT token with the claim name: John Doe.
curl -sS -H "Host: foo.example.com" -H "Authorization: Bearer $TOKEN" "http://${GATEWAY_HOST}/login" | jq .pod
"foo-backend-6df8cc6b9f-fmwcg"
Get another JWT used for testing request authentication:
TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/with-different-claim.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode
Test HTTP routing to the bar-svc backend by specifying a JWT token with the claim name: Tom.
curl -sS -H "Host: bar.example.com" -H "Authorization: Bearer $TOKEN" "http://${GATEWAY_HOST}/" | jq .pod
"bar-backend-6688b8944c-s8htr"
16 - HTTP Timeouts
The default request timeout is set to 15 seconds in Envoy Proxy. The HTTPRouteTimeouts resource allows users to configure request timeouts for an HTTPRouteRule. This task shows you how to configure timeouts.
HTTPRouteTimeouts supports two kinds of timeouts:
- request: Request specifies the maximum duration for a gateway to respond to an HTTP request.
- backendRequest: BackendRequest specifies a timeout for an individual request from the gateway to a backend.
Note: The Request duration must be >= BackendRequest duration
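Both timeouts can be set on the same rule; a minimal sketch (values are illustrative):
timeouts:
  request: "4s" # overall budget for the gateway to respond to the request
  backendRequest: "2s" # budget for an individual request from the gateway to a backend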
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Verification
The example backend can delay responses; we use it as the backend to control response time.
request timeout
We configure the backend to delay responses by 3 seconds, then we set the request timeout to 4 seconds. Envoy Gateway will successfully respond to the request.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
spec:
hostnames:
- timeout.example.com
parentRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: eg
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
timeouts:
request: "4s"
EOF
curl --header "Host: timeout.example.com" http://${GATEWAY_HOST}/?delay=3s -I
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 04 Mar 2024 02:34:21 GMT
content-length: 480
Then we set the request timeout to 2 seconds. In this case, Envoy Gateway will respond with a timeout.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
spec:
hostnames:
- timeout.example.com
parentRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: eg
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
timeouts:
request: "2s"
EOF
curl --header "Host: timeout.example.com" http://${GATEWAY_HOST}/?delay=3s -v
* Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80
> GET /?delay=3s HTTP/1.1
> Host: timeout.example.com
> User-Agent: curl/8.6.0
> Accept: */*
>
< HTTP/1.1 504 Gateway Timeout
< content-length: 24
< content-type: text/plain
< date: Mon, 04 Mar 2024 02:35:03 GMT
<
* Connection #0 to host 127.0.0.1 left intact
upstream request timeout
17 - HTTP URL Rewrite
HTTPURLRewriteFilter defines a filter that modifies a request during forwarding. At most one of these filters may be used on a Route rule. This MUST NOT be used on the same Route rule as an HTTPRequestRedirect filter.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Rewrite URL Prefix Path
You can rewrite the URL prefix as shown below. In this example, any request to http://${GATEWAY_HOST}/get/xxx will be rewritten to http://${GATEWAY_HOST}/replace/xxx.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-filter-url-rewrite
spec:
parentRefs:
- name: eg
hostnames:
- path.rewrite.example
rules:
- matches:
- path:
value: "/get"
filters:
- type: URLRewrite
urlRewrite:
path:
type: ReplacePrefixMatch
replacePrefixMatch: /replace
backendRefs:
- name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-filter-url-rewrite -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying http://${GATEWAY_HOST}/get/origin/path
should rewrite to
http://${GATEWAY_HOST}/replace/origin/path
.
$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get/origin/path"
...
> GET /get/origin/path HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:03:28 GMT
< content-length: 503
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/replace/origin/path",
"host": "path.rewrite.example",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.85.0"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Envoy-Original-Path": [
"/get/origin/path"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"fd84b842-9937-4fb5-83c7-61470d854b90"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-6fdd4b9bd8-8vlc5"
...
You can see that the X-Envoy-Original-Path
is /get/origin/path
, but the actual path is /replace/origin/path
.
Rewrite URL Full Path
You can rewrite the full URL path as shown below. In this example, any request sent to http://${GATEWAY_HOST}/get/origin/path/xxxx will be rewritten to http://${GATEWAY_HOST}/force/replace/fullpath.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-filter-url-rewrite
spec:
parentRefs:
- name: eg
hostnames:
- path.rewrite.example
rules:
- matches:
- path:
type: PathPrefix
value: "/get/origin/path"
filters:
- type: URLRewrite
urlRewrite:
path:
type: ReplaceFullPath
replaceFullPath: /force/replace/fullpath
backendRefs:
- name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-filter-url-rewrite -o yaml
Querying http://${GATEWAY_HOST}/get/origin/path/extra
should rewrite the request to
http://${GATEWAY_HOST}/force/replace/fullpath
.
$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get/origin/path/extra"
...
> GET /get/origin/path/extra HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:09:31 GMT
< content-length: 512
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/force/replace/fullpath",
"host": "path.rewrite.example",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.85.0"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Envoy-Original-Path": [
"/get/origin/path/extra"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"8ab774d6-9ffa-4faa-abbb-f45b0db00895"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-6fdd4b9bd8-8vlc5"
...
You can see that the X-Envoy-Original-Path
is /get/origin/path/extra
, but the actual path is
/force/replace/fullpath
.
Rewrite URL Path with Regex
In addition to core Gateway-API rewrite options, Envoy Gateway supports extended rewrite options through the HTTPRouteFilter API. The HTTPRouteFilter API can be configured to use RE2-compatible regex matchers and substitutions to rewrite a portion of the URL.
In the example below, requests sent to http://${GATEWAY_HOST}/service/xxx/yyy
(where xxx
is a single path portion and yyy
is one or more path portions)
are rewritten to http://${GATEWAY_HOST}/yyy/instance/xxx
. The entire path is matched and rewritten using capture groups.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-filter-url-regex-rewrite
spec:
parentRefs:
- name: eg
hostnames:
- path.regex.rewrite.example
rules:
- matches:
- path:
type: PathPrefix
value: "/"
filters:
- type: ExtensionRef
extensionRef:
group: gateway.envoyproxy.io
kind: HTTPRouteFilter
name: regex-path-rewrite
backendRefs:
- name: backend
port: 3000
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: HTTPRouteFilter
metadata:
name: regex-path-rewrite
spec:
urlRewrite:
path:
type: ReplaceRegexMatch
replaceRegexMatch:
pattern: '^/service/([^/]+)(/.*)$'
substitution: '\2/instance/\1'
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-filter-url-regex-rewrite -o yaml
Querying http://${GATEWAY_HOST}/service/foo/v1/api should rewrite the request to http://${GATEWAY_HOST}/v1/api/instance/foo.
$ curl -L -vvv --header "Host: path.regex.rewrite.example" "http://${GATEWAY_HOST}/service/foo/v1/api"
...
> GET /service/foo/v1/api HTTP/1.1
> Host: path.regex.rewrite.example
> User-Agent: curl/8.7.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Mon, 16 Sep 2024 18:49:48 GMT
< content-length: 482
<
{
"path": "/v1/api/instance/foo",
"host": "path.regex.rewrite.example",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/8.7.1"
],
"X-Envoy-Internal": [
"true"
],
"X-Forwarded-For": [
"10.244.0.37"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"24a5958f-1bfa-4694-a9c1-807d5139a18a"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-765694d47f-lzmpm"
...
You can see that the path is rewritten from /service/foo/v1/api to /v1/api/instance/foo.
Rewrite Host Name
You can rewrite the hostname as shown below. In this example, any request sent to http://${GATEWAY_HOST}/get with --header "Host: path.rewrite.example" will have its host rewritten to envoygateway.io.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-filter-url-rewrite
spec:
parentRefs:
- name: eg
hostnames:
- path.rewrite.example
rules:
- matches:
- path:
type: PathPrefix
value: "/get"
filters:
- type: URLRewrite
urlRewrite:
hostname: "envoygateway.io"
backendRefs:
- name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-filter-url-rewrite -o yaml
Querying http://${GATEWAY_HOST}/get with --header "Host: path.rewrite.example" will rewrite the host to envoygateway.io.
$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:15:15 GMT
< content-length: 481
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/get",
"host": "envoygateway.io",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.85.0"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Forwarded-Host": [
"path.rewrite.example"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"39aa447c-97b9-45a3-a675-9fb266ab1af0"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-6fdd4b9bd8-8vlc5"
...
You can see that the X-Forwarded-Host
is path.rewrite.example
, but the actual host is envoygateway.io
.
Rewrite URL Host Name by Header or Backend
In addition to core Gateway-API rewrite options, Envoy Gateway supports extended rewrite options through the HTTPRouteFilter API.
The HTTPRouteFilter
API can be configured to rewrite the Host header value to:
- The value of a different request header
- The DNS name of the backend that the request is routed to
In the following example, the host header is rewritten to the value of the x-custom-host header.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-filter-hostname-header-rewrite
spec:
parentRefs:
- name: eg
hostnames:
- host.header.rewrite.example
rules:
- matches:
- path:
type: PathPrefix
value: "/header"
filters:
- type: ExtensionRef
extensionRef:
group: gateway.envoyproxy.io
kind: HTTPRouteFilter
name: header-host-rewrite
backendRefs:
- name: backend
port: 3000
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: HTTPRouteFilter
metadata:
name: header-host-rewrite
spec:
urlRewrite:
hostname:
type: Header
header: x-custom-host
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-filter-hostname-header-rewrite -o yaml
Querying http://${GATEWAY_HOST}/header
and providing a custom host rewrite header x-custom-host should rewrite the
request host header to the value of the x-custom-host header.
$ curl -L -vvv --header "Host: host.header.rewrite.example" --header "x-custom-host: foo" "http://${GATEWAY_HOST}/header"
...
> GET /header HTTP/1.1
> Host: host.header.rewrite.example
> User-Agent: curl/8.7.1
> Accept: */*
> x-custom-host: foo
>
* Request completely sent off
< HTTP/1.1 200 OK
<
{
"path": "/header",
"host": "foo",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"X-Custom-Host": [
"foo"
],
"X-Forwarded-Host": [
"host.header.rewrite.example"
],
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-765694d47f-5t6f2"
...
You can see that the host is rewritten from host.header.rewrite.example to the value of the provided x-custom-host header, foo. The original host header is preserved in the X-Forwarded-Host header.
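The backend variant is configured the same way. A minimal sketch, assuming the hostname type Backend, which rewrites the Host header to the DNS name of the backend that the request is routed to:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: HTTPRouteFilter
metadata:
  name: backend-host-rewrite
spec:
  urlRewrite:
    hostname:
      type: Backend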
18 - HTTP3
This task will help you get started with HTTP/3 using Envoy Gateway. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.
Prerequisites
- OpenSSL to generate TLS assets.
Installation
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
TLS Certificates
Generate the certificates and keys used by the Gateway to terminate client TLS connections.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
Create a certificate and a private key for www.example.com
:
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
Store the cert/key in a Secret:
kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt
Update the Gateway from the Quickstart to include an HTTPS listener that listens on port 443
and references the
example-cert
Secret:
kubectl patch gateway eg --type=json --patch '
- op: add
path: /spec/listeners/-
value:
name: https
protocol: HTTPS
port: 443
tls:
mode: Terminate
certificateRefs:
- kind: Secret
group: ""
name: example-cert
'
Apply the following ClientTrafficPolicy to enable HTTP3:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
name: enable-http3
spec:
http3: {}
targetRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: eg
EOF
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Testing
Get the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Query the example app through the Gateway:
The example below uses a custom Docker image with a curl binary built with HTTP/3 support.
docker run --net=host --rm ghcr.io/macbre/curl-http3 curl -kv --http3 -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" https://www.example.com/get
It is currently not possible to port-forward the UDP protocol through a Kubernetes Service (see https://github.com/kubernetes/kubernetes/issues/47862), so an external load balancer is needed to test this feature.
19 - HTTPRoute Request Mirroring
The HTTPRoute resource allows one or more backendRefs to be provided. Requests will be routed to these upstreams. It is possible to divide the traffic between these backends using Traffic Splitting, but it is also possible to mirror requests to another Service instead. Request mirroring is accomplished using Gateway API's HTTPRequestMirrorFilter on the HTTPRoute.
When requests are made to an HTTPRoute that uses an HTTPRequestMirrorFilter, the response will never come from the backendRef defined in the filter. Responses from the mirror backendRef are always ignored.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Mirroring the Traffic
Next, create a new Deployment
and Service
to mirror requests to. The following example will use
a second instance of the application deployed in the quickstart.
cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backend-2
---
apiVersion: v1
kind: Service
metadata:
name: backend-2
labels:
app: backend-2
service: backend-2
spec:
ports:
- name: http
port: 3000
targetPort: 3000
selector:
app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-2
spec:
replicas: 1
selector:
matchLabels:
app: backend-2
version: v1
template:
metadata:
labels:
app: backend-2
version: v1
spec:
serviceAccountName: backend-2
containers:
- image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
imagePullPolicy: IfNotPresent
name: backend-2
ports:
- containerPort: 3000
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
EOF
Then create an HTTPRoute that uses an HTTPRequestMirrorFilter to send requests to the original service from the quickstart and mirror requests to the service that was just deployed.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-mirror
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
filters:
- type: RequestMirror
requestMirror:
backendRef:
kind: Service
name: backend-2
port: 3000
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-mirror -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying backends.example/get
should result in a 200
response from the example Gateway and the output from the
example app should indicate which pod handled the request. There is only one pod in the deployment for the example app
from the quickstart, so it will be the same on all subsequent requests.
$ curl -v --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-79665566f5-s589f"
...
Check the logs of the pods and you will see that the original deployment and the new deployment each got a request:
$ kubectl logs deploy/backend && kubectl logs deploy/backend-2
...
Starting server, listening on port 3000 (http)
Echoing back request made to /get to client (10.42.0.10:41566)
Starting server, listening on port 3000 (http)
Echoing back request made to /get to client (10.42.0.10:45096)
Multiple BackendRefs
When an HTTPRoute
has multiple backendRefs
and an HTTPRequestMirrorFilter
, traffic splitting will still behave the same as it normally would for the main backendRefs
while the backendRef
of the HTTPRequestMirrorFilter
will continue receiving mirrored copies of the incoming requests.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-mirror
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
filters:
- type: RequestMirror
requestMirror:
backendRef:
kind: Service
name: backend-2
port: 3000
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
- group: ""
kind: Service
name: backend-3
port: 3000
EOF
Multiple HTTPRequestMirrorFilters
Multiple HTTPRequestMirrorFilters are not supported on the same HTTPRoute rule. When attempting to do so, the admission webhook will reject the configuration.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-mirror
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
filters:
- type: RequestMirror
requestMirror:
backendRef:
kind: Service
name: backend-2
port: 3000
- type: RequestMirror
requestMirror:
backendRef:
kind: Service
name: backend-3
port: 3000
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
Error from server: error when creating "STDIN": admission webhook "validate.gateway.networking.k8s.io" denied the request: spec.rules[0].filters: Invalid value: "RequestMirror": cannot be used multiple times in the same rule
20 - HTTPRoute Traffic Splitting
The HTTPRoute resource allows one or more backendRefs to be provided. Requests will be routed to these upstreams
if they match the rules of the HTTPRoute. If an invalid backendRef is configured, then HTTP responses will be returned
with status code 500
for all requests that would have been sent to that backend.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Single backendRef
When a single backendRef is configured in an HTTPRoute, it will receive 100% of the traffic.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-headers -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying backends.example/get
should result in a 200
response from the example Gateway and the output from the
example app should indicate which pod handled the request. There is only one pod in the deployment for the example app
from the quickstart, so it will be the same on all subsequent requests.
$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-79665566f5-s589f"
...
Multiple backendRefs
If multiple backendRefs are configured, then traffic will be split between the backendRefs equally unless a weight is configured.
First, create a second instance of the example app from the quickstart:
cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backend-2
---
apiVersion: v1
kind: Service
metadata:
name: backend-2
labels:
app: backend-2
service: backend-2
spec:
ports:
- name: http
port: 3000
targetPort: 3000
selector:
app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-2
spec:
replicas: 1
selector:
matchLabels:
app: backend-2
version: v1
template:
metadata:
labels:
app: backend-2
version: v1
spec:
serviceAccountName: backend-2
containers:
- image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
imagePullPolicy: IfNotPresent
name: backend-2
ports:
- containerPort: 3000
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
EOF
Then create an HTTPRoute that uses both the app from the quickstart and the second instance that was just created:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
- group: ""
kind: Service
name: backend-2
port: 3000
EOF
Querying backends.example/get
should result in 200
responses from the example Gateway and the output from the
example app that indicates which pod handled the request should switch between the first pod and the second one from the
new deployment on subsequent requests.
$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-75bcd4c969-lsxpz"
...
Weighted backendRefs
If multiple backendRefs are configured and an uneven traffic split between the backends is desired, then the weight field can be used to control the weight of requests to each backend. If weight is not configured for a backendRef, it is assumed to be 1.
The weight field in a backendRef controls the distribution of the traffic split. The proportion of
requests to a single backendRef is calculated by dividing its weight
by the sum of all backendRef weights in the
HTTPRoute. The weight is not a percentage and the sum of all weights does not need to add up to 100.
The HTTPRoute below will configure the gateway to send 80% of the traffic to the backend service, and 20% to the backend-2 service.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 8
- group: ""
kind: Service
name: backend-2
port: 3000
weight: 2
EOF
Invalid backendRefs
backendRefs can be considered invalid for the following reasons:
- The group field is configured to something other than "". Currently, only the core API group (specified by omitting the group field or setting it to an empty string) is supported.
- The kind field is configured to anything other than Service. Envoy Gateway currently only supports Kubernetes Service backendRefs.
- The backendRef configures a service with a namespace not permitted by any existing ReferenceGrants.
- The port field is not configured or is configured to a port that does not exist on the Service.
- The named Service configured by the backendRef cannot be found.
Modifying the above example to make the backend-2 backendRef invalid by using a port that does not exist on the Service
will result in 80% of the traffic being sent to the backend service, and 20% of the traffic receiving an HTTP response
with status code 500
.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 8
- group: ""
kind: Service
name: backend-2
port: 9000
weight: 2
EOF
Querying backends.example/get
should result in 200
responses 80% of the time, and 500
responses 20% of the time.
$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 500 Internal Server Error
< server: envoy
< content-length: 0
<
21 - Load Balancing
Envoy load balancing is a way of distributing traffic between multiple hosts within a single upstream cluster in order to effectively make use of available resources.
Envoy Gateway supports the following load balancing policies:
- Round Robin: a simple policy in which each available upstream host is selected in round robin order.
- Random: load balancer selects a random available host.
- Least Request: load balancer uses different algorithms depending on whether hosts have the same or different weights.
- Consistent Hash: load balancer implements consistent hashing to upstream hosts.
Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired load balancing policies. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource. If loadBalancer is not specified in BackendTrafficPolicy, the default load balancing policy is Least Request.
Prerequisites
Install Envoy Gateway
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
To better test the load balancer, you can add more hosts to the upstream cluster by increasing the replicas of the backend deployment:
kubectl patch deployment backend -n default -p '{"spec": {"replicas": 4}}'
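You can confirm the scale-out before running the tests:
kubectl get pods -l app=backend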
Install the hey load testing tool
Install the hey CLI tool; it will be used to generate load and measure response times. Follow the installation instructions from the Hey project docs.
Round Robin
This example will create a Load Balancer with Round Robin policy via BackendTrafficPolicy.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: round-robin-policy
namespace: default
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: round-robin-route
loadBalancer:
type: RoundRobin
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: round-robin-route
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /round
backendRefs:
- name: backend
port: 3000
EOF
The hey
tool will be used to generate 100 concurrent requests.
hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/round
Summary:
Total: 0.0487 secs
Slowest: 0.0440 secs
Fastest: 0.0181 secs
Average: 0.0307 secs
Requests/sec: 2053.1676
Total data: 50500 bytes
Size/request: 505 bytes
Response time histogram:
0.018 [1] |■■
0.021 [2] |■■■■
0.023 [10] |■■■■■■■■■■■■■■■■■■■■■■
0.026 [16] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.028 [7] |■■■■■■■■■■■■■■■■
0.031 [10] |■■■■■■■■■■■■■■■■■■■■■■
0.034 [17] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.036 [18] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.039 [11] |■■■■■■■■■■■■■■■■■■■■■■■■
0.041 [6] |■■■■■■■■■■■■■
0.044 [2] |■■■■
As a result, you can see that all available upstream hosts receive traffic evenly.
kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-2gfp7: received 26 requests
backend-69fcff487f-69g8c: received 25 requests
backend-69fcff487f-bqwpr: received 24 requests
backend-69fcff487f-kbn8l: received 25 requests
Note that these results may vary; the output here is for reference purposes only.
Random
This example will create a Load Balancer with Random policy via BackendTrafficPolicy.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: random-policy
namespace: default
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: random-route
loadBalancer:
type: Random
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: random-route
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /random
backendRefs:
- name: backend
port: 3000
EOF
The hey
tool will be used to generate 1000 concurrent requests.
hey -n 1000 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/random
Summary:
Total: 0.2624 secs
Slowest: 0.0851 secs
Fastest: 0.0007 secs
Average: 0.0179 secs
Requests/sec: 3811.3020
Total data: 506000 bytes
Size/request: 506 bytes
Response time histogram:
0.001 [1] |
0.009 [421] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.018 [219] |■■■■■■■■■■■■■■■■■■■■■
0.026 [118] |■■■■■■■■■■■
0.034 [64] |■■■■■■
0.043 [73] |■■■■■■■
0.051 [41] |■■■■
0.060 [22] |■■
0.068 [19] |■■
0.077 [13] |■
0.085 [9] |■
As a result, you can see that traffic is distributed across all available upstream hosts at random.
kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-bf6lm: received 246 requests
backend-69fcff487f-gwmqk: received 256 requests
backend-69fcff487f-mzngr: received 230 requests
backend-69fcff487f-xghqq: received 268 requests
Note that these results may vary; the output here is for reference purposes only.
Least Request
This example will create a Load Balancer with Least Request policy via BackendTrafficPolicy.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: least-request-policy
namespace: default
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: least-request-route
loadBalancer:
type: LeastRequest
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: least-request-route
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /least
backendRefs:
- name: backend
port: 3000
EOF
The hey
tool will be used to generate 100 concurrent requests.
hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/least
Summary:
Total: 0.0489 secs
Slowest: 0.0479 secs
Fastest: 0.0054 secs
Average: 0.0297 secs
Requests/sec: 2045.9317
Total data: 50500 bytes
Size/request: 505 bytes
Response time histogram:
0.005 [1] |■■
0.010 [1] |■■
0.014 [8] |■■■■■■■■■■■■■■■
0.018 [6] |■■■■■■■■■■■
0.022 [11] |■■■■■■■■■■■■■■■■■■■■
0.027 [7] |■■■■■■■■■■■■■
0.031 [15] |■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.035 [13] |■■■■■■■■■■■■■■■■■■■■■■■■
0.039 [22] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.044 [12] |■■■■■■■■■■■■■■■■■■■■■■
0.048 [4] |■■■■■■■
As a result, you can see that all available upstream hosts receive traffic, and host backend-69fcff487f-6l2pw receives fewer requests than the others.
kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-59hvs: received 24 requests
backend-69fcff487f-6l2pw: received 19 requests
backend-69fcff487f-ktsx4: received 30 requests
backend-69fcff487f-nqxc7: received 27 requests
If you send one more request to ${GATEWAY_HOST}/least, host backend-69fcff487f-6l2pw is very likely to be selected by the load balancer and receive this request.
backend-69fcff487f-59hvs: received 24 requests
backend-69fcff487f-6l2pw: received 20 requests
backend-69fcff487f-ktsx4: received 30 requests
backend-69fcff487f-nqxc7: received 27 requests
Note that these results may vary; the output here is for reference purposes only.
Consistent Hash
This example will create a Load Balancer with Consistent Hash policy via BackendTrafficPolicy.
The underlying consistent hashing algorithm that Envoy Gateway utilises is Maglev, and it can derive the hash from the following properties:
- SourceIP
- Header
- Cookie
These are also the supported values for the consistent hash type.
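The size of the Maglev hash table can optionally be tuned via the tableSize field of consistentHash; a sketch (treat the field and value as illustrative assumptions; Envoy requires the table size to be a prime number):
loadBalancer:
  type: ConsistentHash
  consistentHash:
    type: SourceIP
    tableSize: 65537 # must be prime; a larger table gives a more even spread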
Source IP
This example will create a Load Balancer with Source IP based Consistent Hash policy.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: source-ip-policy
namespace: default
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: source-ip-route
loadBalancer:
type: ConsistentHash
consistentHash:
type: SourceIP
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: source-ip-route
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /source
backendRefs:
- name: backend
port: 3000
EOF
The hey
tool will be used to generate 100 concurrent requests.
hey -n 100 -c 100 -host "www.example.com" http://${GATEWAY_HOST}/source
Summary:
Total: 0.0539 secs
Slowest: 0.0500 secs
Fastest: 0.0198 secs
Average: 0.0340 secs
Requests/sec: 1856.5666
Total data: 50600 bytes
Size/request: 506 bytes
Response time histogram:
0.020 [1] |■■
0.023 [5] |■■■■■■■■■■■
0.026 [12] |■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.029 [16] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.032 [11] |■■■■■■■■■■■■■■■■■■■■■■■■
0.035 [7] |■■■■■■■■■■■■■■■■
0.038 [8] |■■■■■■■■■■■■■■■■■■
0.041 [18] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.044 [15] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.047 [4] |■■■■■■■■■
0.050 [3] |■■■■■■■
As a result, you can see that all traffic is routed to a single upstream host, since all requests from the client share the same source IP.
kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-grzkj: received 0 requests
backend-69fcff487f-n4d8w: received 100 requests
backend-69fcff487f-tb7zx: received 0 requests
backend-69fcff487f-wbzpg: received 0 requests
If you send these requests from a different client, the upstream host that receives the traffic may vary.
Header
This example will create a Load Balancer with Header based Consistent Hash policy.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: header-policy
namespace: default
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: header-route
loadBalancer:
type: ConsistentHash
consistentHash:
type: Header
header:
name: FooBar
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: header-route
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /header
backendRefs:
- name: backend
port: 3000
EOF
The hey tool will be used to generate 100 concurrent requests.
hey -n 100 -c 100 -host "www.example.com" -H "FooBar: 1.2.3.4" http://${GATEWAY_HOST}/header
Summary:
Total: 0.0579 secs
Slowest: 0.0510 secs
Fastest: 0.0323 secs
Average: 0.0431 secs
Requests/sec: 1728.6064
Total data: 53800 bytes
Size/request: 538 bytes
Response time histogram:
0.032 [1] |■■
0.034 [3] |■■■■■■
0.036 [1] |■■
0.038 [1] |■■
0.040 [7] |■■■■■■■■■■■■■■
0.042 [20] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.044 [20] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.045 [20] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.047 [16] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
0.049 [9] |■■■■■■■■■■■■■■■■■■
0.051 [2] |■■■■
As a result, you can see that all traffic is routed to a single upstream host, since all requests carry the same header value.
kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-dvt9r: received 0 requests
backend-69fcff487f-f8qdl: received 100 requests
backend-69fcff487f-gnpm4: received 0 requests
backend-69fcff487f-t2pgm: received 0 requests
If you add a different header value to these requests, the upstream host that receives the traffic may vary. The following output happens when you use hey to send another 100 requests with the header FooBar: 5.6.7.8.
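For reference, that command looks like this (same flags as above, only the header value changes):
hey -n 100 -c 100 -host "www.example.com" -H "FooBar: 5.6.7.8" http://${GATEWAY_HOST}/header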
backend-69fcff487f-dvt9r: received 0 requests
backend-69fcff487f-f8qdl: received 100 requests
backend-69fcff487f-gnpm4: received 100 requests
backend-69fcff487f-t2pgm: received 0 requests
Cookie
This example will create a Load Balancer with a Cookie based Consistent Hash policy.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: cookie-policy
namespace: default
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: cookie-route
loadBalancer:
type: ConsistentHash
consistentHash:
type: Cookie
cookie:
name: FooBar
ttl: 60s
attributes:
SameSite: Strict
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: cookie-route
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /cookie
backendRefs:
- name: backend
port: 3000
EOF
By sending 10 requests with curl to ${GATEWAY_HOST}/cookie, you can see that all requests are routed to a single upstream host, since they carry the same cookie.
for i in {1..10}; do curl -I --header "Host: www.example.com" --cookie "FooBar=1.2.3.4" http://${GATEWAY_HOST}/cookie ; sleep 1; done
kubectl get pods -l app=backend --no-headers -o custom-columns=":metadata.name" | while read -r pod; do echo "$pod: received $(($(kubectl logs $pod | wc -l) - 2)) requests"; done
backend-69fcff487f-5dxz9: received 0 requests
backend-69fcff487f-gpvl2: received 0 requests
backend-69fcff487f-pglgv: received 10 requests
backend-69fcff487f-qxr74: received 0 requests
If you set a different cookie on these requests, the upstream host that receives the traffic may vary. The following output happens when you use curl to send another 10 requests with the cookie FooBar=5.6.7.8.
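For reference, that command looks like this (same flags as above, only the cookie value changes):
for i in {1..10}; do curl -I --header "Host: www.example.com" --cookie "FooBar=5.6.7.8" http://${GATEWAY_HOST}/cookie ; sleep 1; done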
backend-69fcff487f-dvt9r: received 0 requests
backend-69fcff487f-f8qdl: received 0 requests
backend-69fcff487f-gnpm4: received 10 requests
backend-69fcff487f-t2pgm: received 10 requests
If the cookie has not been set on a request, Envoy Gateway will auto-generate a cookie for it according to the ttl and attributes fields.
In this example, the following cookie will be generated (see the set-cookie header in the response) if you send a request without a cookie:
curl -v --header "Host: www.example.com" http://${GATEWAY_HOST}/cookie
> GET /cookie HTTP/1.1
> Host: www.example.com
> User-Agent: curl/7.74.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Fri, 19 Jul 2024 16:49:57 GMT
< content-length: 458
< set-cookie: FooBar="88358b9442700c56"; Max-Age=60; SameSite=Strict; HttpOnly
<
{
"path": "/cookie",
"host": "www.example.com",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.74.0"
],
"X-Envoy-Internal": [
"true"
],
"X-Forwarded-For": [
"10.244.0.1"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"1adeaaf7-d45c-48c8-9a4d-eadbccb2fd50"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-69fcff487f-5dxz9"
22 - Local Rate Limit
Rate limiting is a feature that allows the user to limit the number of incoming requests to a predefined value based on attributes within the traffic flow.
Here are some reasons why you may want to implement rate limits:
- To prevent malicious activity such as DDoS attacks.
- To prevent applications and their resources (such as a database) from getting overloaded.
- To create API limits based on user entitlements.
Envoy Gateway supports two types of rate limiting: Global rate limiting and Local rate limiting.
Local rate limiting applies rate limits to the traffic flowing through a single instance of Envoy proxy. This means that if the data plane has 2 replicas of Envoy running, and the rate limit is 10 requests/second, each replica will allow 10 requests/second. This is in contrast to Global Rate Limiting which applies rate limits to the traffic flowing through all instances of Envoy proxy.
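To reason about the effective aggregate limit, you can check how many Envoy replicas are serving your Gateway. A quick way to do so, assuming the quickstart eg Gateway in the default namespace:
kubectl get deployment -n envoy-gateway-system -l gateway.envoyproxy.io/owning-gateway-name=eg,gateway.envoyproxy.io/owning-gateway-namespace=default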
Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their rate limit intent. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Note: Limit is applied per route. Even if a BackendTrafficPolicy targets a gateway, each route in that gateway still has a separate rate limit bucket. For example, if a gateway has 2 routes, and the limit is 100r/s, then each route has its own 100r/s rate limit bucket.
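For illustration, here is a minimal sketch of a BackendTrafficPolicy that targets a Gateway rather than a single route (the policy name and limit values are illustrative):
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: policy-gateway
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  rateLimit:
    type: Local
    local:
      rules:
      - limit:
          requests: 100
          unit: Second
With such a policy, each HTTPRoute attached to the eg Gateway gets its own 100 requests/second bucket per Envoy replica.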
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Rate Limit Specific User
Here is an example of a rate limit implemented by the application developer to limit a specific user by matching on a custom x-user-id header with a value set to one.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
rateLimit:
type: Local
local:
rules:
- clientSelectors:
- headers:
- name: x-user-id
value: one
limit:
requests: 3
unit: Hour
EOF
HTTPRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-ratelimit -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Let’s query ratelimit.example/get 4 times. We should receive a 200 response from the example Gateway for the first 3 requests, and then a 429 status code for the 4th request, since the limit is set at 3 requests/Hour for requests that contain the header x-user-id with value one.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
You should be able to send requests with the x-user-id header set to a different value and receive successful responses from the server.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: two" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:36 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:37 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:38 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:39 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
Rate Limit Specific User Unless within Test Org
Here is an example of a rate limit implemented by the application developer to limit a specific user by matching on a custom x-user-id header with a value set to one. However, the user must not be limited if logged in within the Test org, as determined by the custom header x-org-id being set to test.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
rateLimit:
type: Local
local:
rules:
- clientSelectors:
- headers:
- name: x-user-id
value: one
- name: x-org-id
value: test
invert: true
limit:
requests: 3
unit: Hour
EOF
HTTPRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-ratelimit -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Let’s query ratelimit.example/get 4 times with x-user-id set to one and x-org-id set to org1. We should receive a 200 response from the example Gateway for the first 3 requests, and the last request should be rate limited.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" --header "x-org-id: org1" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
Let’s query ratelimit.example/get 4 times with x-user-id set to one and x-org-id set to test. We should receive a 200 response from the example Gateway for all 4 requests, unlike the previous example where the last request was rate limited.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" --header "x-org-id: test" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
Rate Limit All Requests
This example shows you how to rate limit all requests matching the HTTPRoute rule at 3 requests/Hour by leaving the clientSelectors field unset.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
rateLimit:
type: Local
local:
rules:
- limit:
requests: 3
unit: Hour
EOF
HTTPRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
for i in {1..4}; do curl -I --header "Host: ratelimit.example" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
Note: Local rate limiting does not support distinct matching. If you want to rate limit based on distinct values, you should use Global Rate Limiting.
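For comparison, here is a minimal sketch of what distinct matching looks like with a Global rate limit (values are illustrative; see the Global Rate Limit task for the full setup):
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: policy-httproute
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: http-ratelimit
  rateLimit:
    type: Global
    global:
      rules:
      - clientSelectors:
        - headers:
          - type: Distinct
            name: x-user-id
        limit:
          requests: 3
          unit: Hour
Here every distinct x-user-id value gets its own 3 requests/Hour bucket, shared across all Envoy replicas.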
23 - Multicluster Service Routing
The Multicluster Service API ServiceImport object can be used as part of the Gateway API backendRef for configuring routes. For more information about the Multicluster Service API, follow the SIG documentation.
We will use the Submariner project to set up the multicluster environment for exporting the service to be routed to from peer clusters.
Setting up KIND clusters and installing Submariner
- We will be using KIND clusters to demonstrate this example.
git clone https://github.com/submariner-io/submariner-operator
cd submariner-operator
make clusters
Note: remain in the submariner-operator directory for the rest of the steps in this section
- Install subctl:
curl -Ls https://get.submariner.io | VERSION=v0.14.6 bash
- Set up the Multicluster Service API and Submariner for cross-cluster traffic using ServiceImport
subctl deploy-broker --kubeconfig output/kubeconfigs/kind-config-cluster1 --globalnet
subctl join --kubeconfig output/kubeconfigs/kind-config-cluster1 broker-info.subm --clusterid cluster1 --natt=false
subctl join --kubeconfig output/kubeconfigs/kind-config-cluster2 broker-info.subm --clusterid cluster2 --natt=false
Once the above steps are done and all the pods are up in both clusters, we are ready to install Envoy Gateway.
Install EnvoyGateway
Install the Gateway API CRDs and Envoy Gateway in cluster1:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace --kubeconfig output/kubeconfigs/kind-config-cluster1
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available --kubeconfig output/kubeconfigs/kind-config-cluster1
Install Application
Install the backend application in cluster2 and export it through the subctl command.
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/application.yaml --kubeconfig output/kubeconfigs/kind-config-cluster2
subctl export service backend --namespace default --kubeconfig output/kubeconfigs/kind-config-cluster2
Create Gateway API Objects
Create the Gateway API objects GatewayClass, Gateway and HTTPRoute in cluster1 to set up the routing.
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/multicluster-service.yaml --kubeconfig output/kubeconfigs/kind-config-cluster1
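For illustration, the key part of that manifest is an HTTPRoute whose backendRef points at a ServiceImport instead of a Service; a sketch of such a route looks roughly like this (names and port are illustrative):
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
  - name: eg
  hostnames:
  - "www.example.com"
  rules:
  - backendRefs:
    - group: multicluster.x-k8s.io
      kind: ServiceImport
      name: backend
      port: 3000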
Testing the Configuration
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8888:80 &
Curl the example app through Envoy proxy:
curl --verbose --header "Host: www.example.com" http://localhost:8888/get
24 - Response Override
Response Override allows you to override the response from the backend with a custom one. This can be useful for scenarios such as returning a custom 404 page when the requested resource is not found or a custom 500 error message when the backend is failing.
Installation
Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Testing Response Override
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: response-override
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: backend
responseOverride:
- match:
statusCodes:
- type: Value
value: 404
response:
contentType: text/plain
body:
type: Inline
inline: "Oops! Your request is not found."
- match:
statusCodes:
- type: Value
value: 500
- type: Range
range:
start: 501
end: 511
response:
contentType: application/json
body:
type: ValueRef
valueRef:
group: ""
kind: ConfigMap
name: response-override-config
---
apiVersion: v1
kind: ConfigMap
metadata:
name: response-override-config
data:
response.body: '{"error": "Internal Server Error"}'
EOF
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/status/404
* Trying 127.0.0.1:80...
* Connected to 172.18.0.200 (172.18.0.200) port 80
> GET /status/404 HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< content-type: text/plain
< content-length: 32
< date: Thu, 07 Nov 2024 09:22:29 GMT
<
* Connection #0 to host 172.18.0.200 left intact
Oops! Your request is not found.
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/status/500
* Trying 127.0.0.1:80...
* Connected to 172.18.0.200 (172.18.0.200) port 80
> GET /status/500 HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.5.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< content-type: application/json
< content-length: 34
< date: Thu, 07 Nov 2024 09:23:02 GMT
<
* Connection #0 to host 172.18.0.200 left intact
{"error": "Internal Server Error"}
25 - Retry
A retry setting specifies the maximum number of times an Envoy proxy attempts to connect to a service if the initial call fails. Retries can enhance service availability and application performance by making sure that calls don’t fail permanently because of transient problems such as a temporarily overloaded service or network. The interval between retries prevents the called service from being overwhelmed with requests.
Envoy Gateway supports the following retry settings:
- NumRetries: the number of retries to be attempted. Defaults to 2.
- RetryOn: specifies the retry trigger condition.
- PerRetryPolicy: the retry policy to be applied per retry attempt.
Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their desired retry settings. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Note: There are distinct circuit breaker counters for each BackendReference in an xRoute rule. Even if a BackendTrafficPolicy targets a Gateway, each BackendReference in that gateway still has a separate circuit breaker counter.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Test and customize retry settings
Before applying a BackendTrafficPolicy with a retry setting to a route, let’s test the default retry settings.
curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/status/500"
It will return a 500 response immediately.
* Trying 172.18.255.200:80...
* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET /status/500 HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< date: Fri, 01 Mar 2024 15:12:55 GMT
< content-length: 0
<
* Connection #0 to host 172.18.255.200 left intact
Let’s create a BackendTrafficPolicy with a retry setting.
The request will be retried 5 times with a 100ms base interval and a 10s maximum interval.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: retry-for-route
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: backend
retry:
numRetries: 5
perRetry:
backOff:
baseInterval: 100ms
maxInterval: 10s
timeout: 250ms
retryOn:
httpStatusCodes:
- 500
triggers:
- connect-failure
- retriable-status-codes
EOF
Execute the test again.
curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/status/500"
It will return a 500 response after a short while, once the retries have been exhausted.
* Trying 172.18.255.200:80...
* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET /status/500 HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< date: Fri, 01 Mar 2024 15:15:53 GMT
< content-length: 0
<
* Connection #0 to host 172.18.255.200 left intact
Let’s check the stats to see the retry behavior.
egctl x stats envoy-proxy -n envoy-gateway-system -l gateway.envoyproxy.io/owning-gateway-name=eg,gateway.envoyproxy.io/owning-gateway-namespace=default | grep "envoy_cluster_upstream_rq_retry{envoy_cluster_name=\"httproute/default/backend/rule/0\"}"
You should see the following stat:
envoy_cluster_upstream_rq_retry{envoy_cluster_name="httproute/default/backend/rule/0"} 5
26 - Routing outside Kubernetes
Routing to endpoints outside the Kubernetes cluster where Envoy Gateway and its corresponding Envoy Proxy fleet are running is a common use case. This can be achieved by:
- defining FQDN addresses in an EndpointSlice (covered in this document)
- defining a Backend resource, as described in the Backend Task.
Installation
Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Configuration
Define a Service and an EndpointSlice that represent https://httpbin.org:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: httpbin
namespace: default
spec:
ports:
- port: 443
protocol: TCP
targetPort: 443
name: https
---
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
name: httpbin
namespace: default
labels:
kubernetes.io/service-name: httpbin
addressType: FQDN
ports:
- name: https
protocol: TCP
port: 443
endpoints:
- addresses:
- "httpbin.org"
EOF
Update the Gateway to include a TLS Listener on port 443
kubectl patch gateway eg --type=json --patch '
- op: add
path: /spec/listeners/-
value:
name: tls
protocol: TLS
port: 443
tls:
mode: Passthrough
'
Add a TLSRoute that routes incoming traffic to the backend we created above:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
name: httpbin
spec:
parentRefs:
- name: eg
sectionName: tls
rules:
- backendRefs:
- name: httpbin
port: 443
EOF
Get the Gateway address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Send a request and view the response:
curl -I -HHost:httpbin.org --resolve "httpbin.org:443:${GATEWAY_HOST}" https://httpbin.org/
27 - TCP Routing
TCPRoute provides a way to route TCP requests. When combined with a Gateway listener, it can be used to forward connections on the port specified by the listener to a set of backends specified by the TCPRoute. To learn more about TCP routing, refer to the Gateway API documentation.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Configuration
In this example, we have one Gateway resource and two TCPRoute resources that distribute the traffic with the following rules:
All TCP streams on port 8088 of the Gateway are forwarded to port 3001 of the foo Kubernetes Service.
All TCP streams on port 8089 of the Gateway are forwarded to port 3002 of the bar Kubernetes Service.
In this example, two TCP listeners will be applied to the Gateway in order to route traffic to two separate backend TCPRoutes; note that the protocol set for the listeners on the Gateway is TCP.
Install the GatewayClass and a tcp-gateway Gateway first.
cat <<EOF | kubectl apply -f -
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: tcp-gateway
spec:
gatewayClassName: eg
listeners:
- name: foo
protocol: TCP
port: 8088
allowedRoutes:
kinds:
- kind: TCPRoute
- name: bar
protocol: TCP
port: 8089
allowedRoutes:
kinds:
- kind: TCPRoute
EOF
Install two Services, foo and bar, which are bound to backend-1 and backend-2.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: foo
labels:
app: backend-1
spec:
ports:
- name: http
port: 3001
targetPort: 3000
selector:
app: backend-1
---
apiVersion: v1
kind: Service
metadata:
name: bar
labels:
app: backend-2
spec:
ports:
- name: http
port: 3002
targetPort: 3000
selector:
app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-1
spec:
replicas: 1
selector:
matchLabels:
app: backend-1
version: v1
template:
metadata:
labels:
app: backend-1
version: v1
spec:
containers:
- image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
imagePullPolicy: IfNotPresent
name: backend-1
ports:
- containerPort: 3000
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SERVICE_NAME
value: foo
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-2
spec:
replicas: 1
selector:
matchLabels:
app: backend-2
version: v1
template:
metadata:
labels:
app: backend-2
version: v1
spec:
containers:
- image: gcr.io/k8s-staging-gateway-api/echo-basic:v20231214-v1.0.0-140-gf544a46e
imagePullPolicy: IfNotPresent
name: backend-2
ports:
- containerPort: 3000
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SERVICE_NAME
value: bar
EOF
Install two TCPRoutes, tcp-app-1 and tcp-app-2, with different sectionName values:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
name: tcp-app-1
spec:
parentRefs:
- name: tcp-gateway
sectionName: foo
rules:
- backendRefs:
- name: foo
port: 3001
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
name: tcp-app-2
spec:
parentRefs:
- name: tcp-gateway
sectionName: bar
rules:
- backendRefs:
- name: bar
port: 3002
EOF
In the above example we separate the traffic for the two separate backend TCP Services by using the sectionName field in the parentRefs:
spec:
parentRefs:
- name: tcp-gateway
sectionName: foo
This corresponds directly with the name in the listeners in the Gateway:
listeners:
- name: foo
protocol: TCP
port: 8088
- name: bar
protocol: TCP
port: 8089
In this way, each TCPRoute “attaches” itself to a different port on the Gateway, so that the foo service takes traffic for port 8088 from outside the cluster and the bar service takes the port 8089 traffic.
Before testing, get the tcp-gateway Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/tcp-gateway -o jsonpath='{.status.addresses[0].value}')
You can use nc to test the TCP connections to Envoy Gateway on the different ports; both should succeed:
nc -zv ${GATEWAY_HOST} 8088
nc -zv ${GATEWAY_HOST} 8089
You can also send requests to Envoy Gateway and receive responses as shown below:
curl -i "http://${GATEWAY_HOST}:8088"
HTTP/1.1 200 OK
Content-Type: application/json
X-Content-Type-Options: nosniff
Date: Tue, 03 Jan 2023 10:18:36 GMT
Content-Length: 267
{
"path": "/",
"host": "xxx.xxx.xxx.xxx:8088",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.85.0"
]
},
"namespace": "default",
"ingress": "",
"service": "foo",
"pod": "backend-1-c6c5fb958-dl8vl"
}
You can see that the traffic is routed to the foo service when sending requests to port 8088.
curl -i "http://${GATEWAY_HOST}:8089"
HTTP/1.1 200 OK
Content-Type: application/json
X-Content-Type-Options: nosniff
Date: Tue, 03 Jan 2023 10:19:28 GMT
Content-Length: 267
{
"path": "/",
"host": "xxx.xxx.xxx.xxx:8089",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.85.0"
]
},
"namespace": "default",
"ingress": "",
"service": "bar",
"pod": "backend-2-98fcff498-hcmgb"
}
You can see that the traffic is routed to the bar service when sending requests to port 8089.
28 - UDP Routing
The UDPRoute resource allows users to configure UDP routing by matching UDP traffic and forwarding it to Kubernetes backends. This task will use a CoreDNS example to walk you through the steps required to configure UDPRoute on Envoy Gateway.
Note: UDPRoute allows Envoy Gateway to operate as a non-transparent proxy between a UDP client and server. The lack of transparency means that the upstream server will see the source IP and port of the Gateway instead of the client. For additional information, refer to Envoy’s UDP proxy documentation.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Installation
Install CoreDNS in the Kubernetes cluster as the example backend. The installed CoreDNS is listening on UDP port 53 for DNS lookups.
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/udp-routing-example-backend.yaml
Wait for the CoreDNS deployment to become available:
kubectl wait --timeout=5m deployment/coredns --for=condition=Available
Update the Gateway from the Quickstart to include a UDP listener that listens on UDP port 5300:
kubectl patch gateway eg --type=json --patch '
- op: add
path: /spec/listeners/-
value:
name: coredns
protocol: UDP
port: 5300
allowedRoutes:
kinds:
- kind: UDPRoute
'
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Configuration
Create a UDPRoute resource to route UDP traffic received on Gateway port 5300 to the CoreDNS backend.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
name: coredns
spec:
parentRefs:
- name: eg
sectionName: coredns
rules:
- backendRefs:
- name: coredns
port: 53
EOF
Verify the UDPRoute status:
kubectl get udproute/coredns -o yaml
Testing
Get the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Use the dig command to query the DNS entry foo.bar.com through the Gateway.
dig @${GATEWAY_HOST} -p 5300 foo.bar.com
You should see a result like the output below, which means that the DNS query has been successfully routed to the backend CoreDNS.
Note: 49.51.177.138 is the resolved address of GATEWAY_HOST.
; <<>> DiG 9.18.1-1ubuntu1.1-Ubuntu <<>> @49.51.177.138 -p 5300 foo.bar.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58125
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 3
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 24fb86eba96ebf62 (echoed)
;; QUESTION SECTION:
;foo.bar.com. IN A
;; ADDITIONAL SECTION:
foo.bar.com. 0 IN A 10.244.0.19
_udp.foo.bar.com. 0 IN SRV 0 0 42376 .
;; Query time: 1 msec
;; SERVER: 49.51.177.138#5300(49.51.177.138) (UDP)
;; WHEN: Fri Jan 13 10:20:34 UTC 2023
;; MSG SIZE rcvd: 114
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway.
Delete the CoreDNS example manifest and the UDPRoute:
kubectl delete deploy/coredns
kubectl delete service/coredns
kubectl delete cm/coredns
kubectl delete udproute/coredns
Next Steps
Checkout the Developer Guide to get involved in the project.