Security
- 1: Accelerating TLS Handshakes using Private Key Provider in Envoy
- 2: Backend Mutual TLS: Gateway to Backend
- 3: Backend TLS: Gateway to Backend
- 4: Basic Authentication
- 5: CORS
- 6: External Authorization
- 7: IP Allowlist/Denylist
- 8: JWT Authentication
- 9: JWT Claim-Based Authorization
- 10: Mutual TLS: External Clients to the Gateway
- 11: OIDC Authentication
- 12: Secure Gateways
- 13: Threat Model
- 14: TLS Passthrough
- 15: TLS Termination for TCP
- 16: Using cert-manager For TLS Termination
1 - Accelerating TLS Handshakes using Private Key Provider in Envoy
TLS operations can be accelerated, and the private key can be protected, by using specialized hardware. Envoy can leverage this hardware through its Private Key Provider extensions.
Today, there are two private key providers implemented in Envoy as contrib extensions:
- QAT
- CryptoMB
Both of them are used to accelerate the TLS handshake through hardware capabilities.
This task walks you through the steps required to configure TLS Termination mode for TCP traffic while also using the Envoy Private Key Provider to accelerate the TLS handshake by leveraging QAT, the hardware accelerator available on Intel SPR/EMR Xeon server platforms.
Prerequisites
Install Linux kernel 5.17 or later
Ensure the node has QAT devices by checking that the QAT physical function devices are present (see Supported Devices).
echo `(lspci -d 8086:4940 && lspci -d 8086:4941 && lspci -d 8086:4942 && lspci -d 8086:4943 && lspci -d 8086:4946 && lspci -d 8086:4947) | wc -l` supported devices found.
Enable IOMMU from BIOS
Enable IOMMU for Linux kernel
Figure out the QAT VF device id
lspci -d 8086:4941 && lspci -d 8086:4943 && lspci -d 8086:4947
Attach the QAT device to vfio-pci through a kernel parameter, using the device ID obtained from the previous command.
cat /etc/default/grub
GRUB_CMDLINE_LINUX="intel_iommu=on vfio-pci.ids=[QAT device id]"
update-grub
reboot
Once the system is rebooted, check if the IOMMU has been enabled via the following command:
dmesg | grep IOMMU
[ 1.528237] DMAR: IOMMU enabled
Enable virtual function devices for QAT device
modprobe vfio_pci
rmmod qat_4xxx
modprobe qat_4xxx
qat_device=$(lspci -D -d :[QAT device id] | awk '{print $1}')
for i in $qat_device; do echo 16 | sudo tee /sys/bus/pci/devices/$i/sriov_numvfs; done
chmod a+rw /dev/vfio/*
Increase the container runtime memory lock limit (using containerd as the example here)
mkdir /etc/systemd/system/containerd.service.d
cat <<EOF >>/etc/systemd/system/containerd.service.d/memlock.conf
[Service]
LimitMEMLOCK=134217728
EOF
Restart the container runtime (containerd in this example; CRI-O has a similar concept)
systemctl daemon-reload
systemctl restart containerd
Install Intel® QAT Device Plugin for Kubernetes
kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/qat_plugin?ref=main'
Verification of the plugin deployment and detection of QAT hardware can be confirmed by examining the resource allocations on the nodes:
kubectl get node -o yaml| grep qat.intel.com
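If the plugin is running and QAT virtual functions are exposed, the output should contain qat.intel.com entries under the node capacity and allocatable sections, roughly like the following sketch (the resource count depends on how many VFs were configured):
    qat.intel.com/cy: "16"
    qat.intel.com/cy: "16"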
The CryptoMB private key provider requires a node with a 3rd generation Intel Xeon Scalable processor or later.
If not all nodes in the Kubernetes cluster support Intel® AVX-512, you need to label the nodes to distinguish them, either manually or by using Node Feature Discovery (NFD).
kubectl apply -k https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.15.1
Check the available nodes with the required CPU instructions.
Check the node labels if using NFD:
kubectl get nodes -l feature.node.kubernetes.io/cpu-cpuid.AVX512F,feature.node.kubernetes.io/cpu-cpuid.AVX512DQ,feature.node.kubernetes.io/cpu-cpuid.AVX512BW,feature.node.kubernetes.io/cpu-cpuid.AVX512VBMI2,feature.node.kubernetes.io/cpu-cpuid.AVX512IFMA
Check the CPU flags manually on the node if not using NFD:
cat /proc/cpuinfo |grep avx512f|grep avx512dq|grep avx512bw|grep avx512_vbmi2|grep avx512ifma
Installation
Follow the steps from the Quickstart to install Envoy Gateway.
Enable the EnvoyPatchPolicy feature, which will allow us to directly configure the Private Key Provider Envoy Filter, since Envoy Gateway does not directly expose this functionality.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: envoy-gateway-config
namespace: envoy-gateway-system
data:
envoy-gateway.yaml: |
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
extensionApis:
enableEnvoyPatchPolicy: true
EOF
After updating the ConfigMap, you will need to wait for the configuration to take effect. You can force the configuration to be reloaded by restarting the envoy-gateway deployment:
kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
Create gateway for TLS termination
Follow the instructions in TLS Termination for TCP to setup a TCP gateway to terminate the TLS connection.
Update the GatewayClass to use the envoyproxy image with contrib extensions and to request the required resources.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
group: gateway.envoyproxy.io
kind: EnvoyProxy
name: custom-proxy-config
namespace: envoy-gateway-system
EOF
Change EnvoyProxy configuration
Use the envoyproxy image with contrib extensions and add QAT resource requests, so that the Kubernetes scheduler places the Envoy proxy pod on a machine with the required resources.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
concurrency: 1
provider:
type: Kubernetes
kubernetes:
envoyService:
type: NodePort
envoyDeployment:
container:
image: envoyproxy/envoy-contrib-dev:latest
resources:
requests:
cpu: 1000m
memory: 4096Mi
qat.intel.com/cy: '1'
limits:
cpu: 1000m
memory: 4096Mi
qat.intel.com/cy: '1'
EOF
Use the envoyproxy image with contrib extensions and add node affinity to schedule the Envoy proxy pod on a machine with the required CPU instructions.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
concurrency: 1
provider:
type: Kubernetes
kubernetes:
envoyService:
type: NodePort
envoyDeployment:
container:
image: envoyproxy/envoy-contrib-dev:latest
resources:
requests:
cpu: 1000m
memory: 4096Mi
limits:
cpu: 1000m
memory: 4096Mi
pod:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: feature.node.kubernetes.io/cpu-cpuid.AVX512F
operator: Exists
- key: feature.node.kubernetes.io/cpu-cpuid.AVX512DQ
operator: Exists
- key: feature.node.kubernetes.io/cpu-cpuid.AVX512BW
operator: Exists
- key: feature.node.kubernetes.io/cpu-cpuid.AVX512IFMA
operator: Exists
- key: feature.node.kubernetes.io/cpu-cpuid.AVX512VBMI2
operator: Exists
EOF
Alternatively, use preferredDuringSchedulingIgnoredDuringExecution for best-effort scheduling, or omit node affinity entirely and rely on default scheduling. The CryptoMB private key provider supports software fallback if the required CPU instructions are not available.
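For reference, a minimal sketch of the best-effort variant using preferredDuringSchedulingIgnoredDuringExecution (only the pod affinity stanza is shown; the weight and the single label key are illustrative):
pod:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: feature.node.kubernetes.io/cpu-cpuid.AVX512F
            operator: Exists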
Benchmark before enabling private key provider
First follow the instructions in TLS Termination for TCP to do the functionality test.
Ensure the CPU frequency governor is set to performance.
export NUM_CPUS=`lscpu | grep "^CPU(s):"|awk '{print $2}'`
for i in $(seq 0 $((NUM_CPUS - 1))); do sudo cpufreq-set -c $i -g performance; done
Using NodePort as the example, fetch the node port from the Envoy service.
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
export NODE_PORT=$(kubectl -n envoy-gateway-system get svc/$ENVOY_SERVICE -o jsonpath='{.spec.ports[0].nodePort}')
echo "127.0.0.1 www.example.com" >> /etc/hosts
Benchmark the gateway with fortio.
fortio load -c 10 -k -qps 0 -t 30s -keepalive=false https://www.example.com:${NODE_PORT}
Apply EnvoyPatchPolicy to enable private key provider
For the QAT private key provider:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
name: key-provider-patch-policy
namespace: default
spec:
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: eg
type: JSONPatch
jsonPatches:
- type: "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
name: default/example-cert
operation:
op: add
path: "/tls_certificate/private_key_provider"
value:
provider_name: qat
typed_config:
"@type": "type.googleapis.com/envoy.extensions.private_key_providers.qat.v3alpha.QatPrivateKeyMethodConfig"
private_key:
inline_string: |
abcd
poll_delay: 0.001s
fallback: true
- type: "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
name: default/example-cert
operation:
op: copy
from: "/tls_certificate/private_key"
path: "/tls_certificate/private_key_provider/typed_config/private_key"
EOF
For the CryptoMB private key provider:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
name: key-provider-patch-policy
namespace: default
spec:
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: eg
type: JSONPatch
jsonPatches:
- type: "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
name: default/example-cert
operation:
op: add
path: "/tls_certificate/private_key_provider"
value:
provider_name: cryptomb
typed_config:
"@type": "type.googleapis.com/envoy.extensions.private_key_providers.cryptomb.v3alpha.CryptoMbPrivateKeyMethodConfig"
private_key:
inline_string: |
abcd
poll_delay: 0.001s
fallback: true
- type: "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret"
name: default/example-cert
operation:
op: copy
from: "/tls_certificate/private_key"
path: "/tls_certificate/private_key_provider/typed_config/private_key"
EOF
Benchmark after enabling private key provider
First follow the instructions in TLS Termination for TCP to do the functionality test again.
Benchmark the gateway with fortio.
fortio load -c 64 -k -qps 0 -t 30s -keepalive=false https://www.example.com:${NODE_PORT}
Benchmark Result
You should see a performance boost after the private key provider is enabled. For example, you may get results like the following.
Without private key provider:
All done 43069 calls (plus 10 warmup) 6.966 ms avg, 1435.4 qps
With the QAT private key provider, the QPS is over 3 times higher than without a private key provider:
All done 134746 calls (plus 128 warmup) 28.505 ms avg, 4489.6 qps
With the CryptoMB private key provider, the QPS is over 2 times higher than without a private key provider:
All done 93983 calls (plus 128 warmup) 40.880 ms avg, 3130.5 qps
2 - Backend Mutual TLS: Gateway to Backend
This task demonstrates how mTLS can be achieved between the Gateway and a backend. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.
Envoy Gateway supports the Gateway-API defined BackendTLSPolicy to establish TLS. For mTLS, the Gateway must authenticate by presenting a client certificate to the backend.
Prerequisites
- OpenSSL to generate TLS assets.
Installation
Follow the steps from the Backend TLS task to install Envoy Gateway and configure TLS to the backend server.
TLS Certificates
Generate the certificates and keys used by the Gateway for authentication against the backend.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout clientca.key -out clientca.crt
Create a certificate and a private key for the client:
openssl req -new -newkey rsa:2048 -nodes -keyout client.key -out client.csr -subj "/CN=example-client/O=example organization"
openssl x509 -req -days 365 -CA clientca.crt -CAkey clientca.key -set_serial 0 -in client.csr -out client.crt
Store the cert/key in a Secret:
kubectl -n envoy-gateway-system create secret tls example-client-cert --key=client.key --cert=client.crt
Store the CA Cert in another Secret:
kubectl create configmap example-client-ca --from-file=clientca.crt
Enforce Client Certificate Authentication on the backend
Patch the existing quickstart backend to enforce Client Certificate Authentication. The patch will mount the server certificate and key required for TLS, and the CA certificate into the backend as volumes.
kubectl patch deployment backend --type=json --patch '
- op: add
path: /spec/template/spec/containers/0/volumeMounts
value:
- name: client-certs-volume
mountPath: /etc/client-certs
- name: secret-volume
mountPath: /etc/secret-volume
- op: add
path: /spec/template/spec/volumes
value:
- name: client-certs-volume
configMap:
name: example-client-ca
items:
- key: clientca.crt
path: crt
- name: secret-volume
secret:
secretName: example-cert
items:
- key: tls.crt
path: crt
- key: tls.key
path: key
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: TLS_CLIENT_CACERTS
value: /etc/client-certs/crt
'
Configure Envoy Proxy to use a client certificate
In addition to enablement of backend TLS with the Gateway-API BackendTLSPolicy, Envoy Gateway supports customizing TLS parameters such as TLS Client Certificate. To achieve this, the EnvoyProxy resource can be used to specify a TLS Client Certificate.
First, you need to add ParametersRef in GatewayClass, and refer to EnvoyProxy Config:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
group: gateway.envoyproxy.io
kind: EnvoyProxy
name: custom-proxy-config
namespace: envoy-gateway-system
EOF
Then reference the client certificate Secret in the EnvoyProxy configuration:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
backendTLS:
clientCertificateRef:
kind: Secret
name: example-client-cert
namespace: envoy-gateway-system
EOF
Testing mTLS
Query the TLS-enabled backend through Envoy proxy:
curl -v -HHost:www.example.com --resolve "www.example.com:80:127.0.0.1" \
http://www.example.com:80/get
Inspect the output and see that the response contains the details of the TLS handshake between Envoy and the backend. The response now contains a “peerCertificates” attribute that reflects the client certificate used by the Gateway to establish mTLS with the backend.
< HTTP/1.1 200 OK
[...]
"tls": {
"version": "TLSv1.2",
"serverName": "www.example.com",
"negotiatedProtocol": "http/1.1",
"cipherSuite": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
"peerCertificates": ["-----BEGIN CERTIFICATE-----\n[...]-----END CERTIFICATE-----\n"]
}
3 - Backend TLS: Gateway to Backend
This task demonstrates how TLS can be achieved between the Gateway and a backend. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.
Envoy Gateway supports the Gateway-API defined BackendTLSPolicy.
Prerequisites
- OpenSSL to generate TLS assets.
Installation
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
TLS Certificates
Generate the certificates and keys used by the backend to terminate TLS connections from the Gateways.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout ca.key -out ca.crt
Create a certificate and a private key for www.example.com.
First, create an openssl configuration file:
cat > openssl.conf <<EOF
[req]
req_extensions = v3_req
prompt = no
[v3_req]
keyUsage = keyEncipherment, digitalSignature
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = www.example.com
EOF
Then create a certificate using this openssl configuration file:
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA ca.crt -CAkey ca.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt -extfile openssl.conf -extensions v3_req
Note that the certificate must contain a DNS SAN for the relevant domain.
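To double-check, you can inspect the generated certificate and confirm the SAN entry is present (standard openssl usage; the grep is only to narrow the output):
openssl x509 -in www.example.com.crt -noout -text | grep -A1 "Subject Alternative Name"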
Store the cert/key in a Secret:
kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt
Store the CA Cert in another Secret:
kubectl create configmap example-ca --from-file=ca.crt
Setup TLS on the backend
Patch the existing quickstart backend to enable TLS. The patch will mount the TLS certificate secret into the backend as volume.
kubectl patch deployment backend --type=json --patch '
- op: add
path: /spec/template/spec/containers/0/volumeMounts
value:
- name: secret-volume
mountPath: /etc/secret-volume
- op: add
path: /spec/template/spec/volumes
value:
- name: secret-volume
secret:
secretName: example-cert
items:
- key: tls.crt
path: crt
- key: tls.key
path: key
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: TLS_SERVER_CERT
value: /etc/secret-volume/crt
- op: add
path: /spec/template/spec/containers/0/env/-
value:
name: TLS_SERVER_PRIVKEY
value: /etc/secret-volume/key
'
Create a service that exposes port 443 on the backend service.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
labels:
app: backend
service: backend
name: tls-backend
namespace: default
spec:
selector:
app: backend
ports:
- name: https
port: 443
protocol: TCP
targetPort: 8443
EOF
Create a BackendTLSPolicy instructing Envoy Gateway to establish a TLS connection with the backend and validate the backend certificate is issued by a trusted CA and contains an appropriate DNS SAN.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
name: enable-backend-tls
namespace: default
spec:
targetRefs:
- group: ''
kind: Service
name: tls-backend
sectionName: "443"
validation:
caCertificateRefs:
- name: example-ca
group: ''
kind: ConfigMap
hostname: www.example.com
EOF
Patch the HTTPRoute’s backend reference, so that it refers to the new TLS-enabled service:
kubectl patch HTTPRoute backend --type=json --patch '
- op: replace
path: /spec/rules/0/backendRefs/0/port
value: 443
- op: replace
path: /spec/rules/0/backendRefs/0/name
value: tls-backend
'
Verify the HTTPRoute status:
kubectl get HTTPRoute backend -o yaml
Testing backend TLS
Get the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Query the example app through the Gateway:
curl -v -HHost:www.example.com --resolve "www.example.com:80:${GATEWAY_HOST}" \
http://www.example.com:80/get
Inspect the output and see that the response contains the details of the TLS handshake between Envoy and the backend:
< HTTP/1.1 200 OK
[...]
"tls": {
"version": "TLSv1.2",
"serverName": "www.example.com",
"negotiatedProtocol": "http/1.1",
"cipherSuite": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
}
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 80:80 &
Query the TLS-enabled backend through Envoy proxy:
curl -v -HHost:www.example.com --resolve "www.example.com:80:127.0.0.1" \
http://www.example.com:80/get
Inspect the output and see that the response contains the details of the TLS handshake between Envoy and the backend:
< HTTP/1.1 200 OK
[...]
"tls": {
"version": "TLSv1.2",
"serverName": "www.example.com",
"negotiatedProtocol": "http/1.1",
"cipherSuite": "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
}
Customize backend TLS Parameters
In addition to enablement of backend TLS with the Gateway-API BackendTLSPolicy, Envoy Gateway supports customizing TLS parameters. To achieve this, the EnvoyProxy resource can be used to specify TLS parameters. We will customize the TLS version in this example.
First, you need to add ParametersRef in GatewayClass, and refer to EnvoyProxy Config:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
group: gateway.envoyproxy.io
kind: EnvoyProxy
name: custom-proxy-config
namespace: envoy-gateway-system
EOF
You can customize the EnvoyProxy Backend TLS Parameters via EnvoyProxy Config like:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
backendTLS:
minVersion: "1.3"
EOF
Testing TLS Parameters
Query the TLS-enabled backend through Envoy proxy:
curl -v -HHost:www.example.com --resolve "www.example.com:80:127.0.0.1" \
http://www.example.com:80/get
Inspect the output and see that the response contains the details of the TLS handshake between Envoy and the backend. The TLS version is now TLS1.3, as configured in the EnvoyProxy resource. The TLS cipher is also changed, since TLS1.3 supports different ciphers from TLS1.2.
< HTTP/1.1 200 OK
[...]
"tls": {
"version": "TLSv1.3",
"serverName": "www.example.com",
"negotiatedProtocol": "http/1.1",
"cipherSuite": "TLS_AES_128_GCM_SHA256"
}
4 - Basic Authentication
This task provides instructions for configuring HTTP Basic authentication. HTTP Basic authentication checks if an incoming request has a valid username and password before routing the request to a backend service.
Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure HTTP Basic authentication. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Configuration
Envoy Gateway uses .htpasswd format to store the username-password pairs for authentication. The file must be stored in a kubernetes secret and referenced in the SecurityPolicy configuration. The secret is an Opaque secret, and the username-password pairs must be stored in the key “.htpasswd”.
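For illustration, a sketch of what such a Secret might look like (the hash value here is a placeholder, not a real digest):
apiVersion: v1
kind: Secret
metadata:
  name: basic-auth
  namespace: default
type: Opaque
stringData:
  .htpasswd: "foo:{SHA}<base64-encoded SHA-1 hash of the password>"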
Create a root certificate
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
Create a certificate secret
Create a certificate and a private key for www.example.com:
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
Create certificate
kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt
Enable HTTPS
Update the Gateway from the Quickstart to include an HTTPS listener that listens on port 443 and references the example-cert Secret:
kubectl patch gateway eg --type=json --patch '
- op: add
path: /spec/listeners/-
value:
name: https
protocol: HTTPS
port: 443
tls:
mode: Terminate
certificateRefs:
- kind: Secret
group: ""
name: example-cert
'
Create a .htpasswd file
First, create a .htpasswd file with the username and password you want to use for authentication.
Note: Please always use HTTPS with Basic Authentication. This prevents credentials from being transmitted in plain text.
The input password won't be saved; instead, a hash will be generated and saved in the output file. When a request tries to access protected resources, the password in the "Authorization" HTTP header will be hashed and compared with the saved hash.
Note: only the SHA hash algorithm is supported for now.
htpasswd -cbs .htpasswd foo bar
You can also add more users to the file:
htpasswd -bs .htpasswd foo1 bar1
Create a basic-auth secret
Next, create a kubernetes secret with the generated .htpasswd file in the previous step.
kubectl create secret generic basic-auth --from-file=.htpasswd
Create a SecurityPolicy
The below example defines a SecurityPolicy that authenticates requests against the user list in the kubernetes secret generated in the previous step.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: basic-auth-example
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: backend
basicAuth:
users:
name: "basic-auth"
EOF
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/basic-auth-example -o yaml
Testing
Ensure the GATEWAY_HOST
environment variable from the Quickstart is set. If not, follow the
Quickstart instructions to set the variable.
echo $GATEWAY_HOST
Send a request to the backend service without Authentication
header:
curl -kv -H "Host: www.example.com" "https://${GATEWAY_HOST}/"
You should see 401 Unauthorized
in the response, indicating that the request is not allowed without authentication.
* Connected to 127.0.0.1 (127.0.0.1) port 443
...
* Server certificate:
* subject: CN=www.example.com; O=example organization
* issuer: O=example Inc.; CN=example.com
> GET / HTTP/2
> Host: www.example.com
> User-Agent: curl/8.6.0
> Accept: */*
...
< HTTP/2 401
< content-length: 58
< content-type: text/plain
< date: Wed, 06 Mar 2024 15:59:36 GMT
<
* Connection #0 to host 127.0.0.1 left intact
User authentication failed. Missing username and password.
Send a request to the backend service with Authentication
header:
curl -kv -H "Host: www.example.com" -u 'foo:bar' "https://${GATEWAY_HOST}/"
The request should be allowed and you should see the response from the backend service.
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.
Delete the SecurityPolicy and the secret
kubectl delete securitypolicy/basic-auth-example
kubectl delete secret/basic-auth
kubectl delete secret/example-cert
Next Steps
Checkout the Developer Guide to get involved in the project.
5 - CORS
This task provides instructions for configuring Cross-Origin Resource Sharing (CORS) on Envoy Gateway. CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.
Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure CORS. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Configuration
When configuring CORS, an origin can be specified either as a precise hostname or as a hostname with a wildcard prefix, which allows all subdomains of the specified hostname. In addition, the entire origin (with or without a scheme) can be a wildcard to allow all origins.
The below example defines a SecurityPolicy that allows CORS for all HTTP requests originating from www.foo.com.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: cors-example
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: backend
cors:
allowOrigins:
- "http://*.foo.com"
- "http://*.foo.com:80"
allowMethods:
- GET
- POST
allowHeaders:
- "x-header-1"
- "x-header-2"
exposeHeaders:
- "x-header-3"
- "x-header-4"
EOF
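As noted above, the entire origin can also be a wildcard. For illustration, an allowOrigins stanza permitting any origin could look like the following sketch (not part of the example policy above):
  cors:
    allowOrigins:
    - "*"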
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/cors-example -o yaml
Testing
Ensure the GATEWAY_HOST
environment variable from the Quickstart is set. If not, follow the
Quickstart instructions to set the variable.
echo $GATEWAY_HOST
Verify that the CORS headers are present in the response of the OPTIONS request from http://www.foo.com:
curl -H "Origin: http://www.foo.com" \
-H "Host: www.example.com" \
-H "Access-Control-Request-Method: GET" \
-X OPTIONS -v -s \
http://$GATEWAY_HOST \
1> /dev/null
You should see the below response, indicating that the request from http://www.foo.com
is allowed:
< access-control-allow-origin: http://www.foo.com
< access-control-allow-methods: GET, POST
< access-control-allow-headers: x-header-1, x-header-2
< access-control-max-age: 86400
< access-control-expose-headers: x-header-3, x-header-4
If you try to send a request from http://www.bar.com, you should see the below response:
curl -H "Origin: http://www.bar.com" \
-H "Host: www.example.com" \
-H "Access-Control-Request-Method: GET" \
-X OPTIONS -v -s \
http://$GATEWAY_HOST \
1> /dev/null
You won’t see any CORS headers in the response, indicating that the request from http://www.bar.com
was not allowed.
If you try to send a request from http://www.foo.com:8080, you should also see a similar response because the port number 8080 is not included in the allowed origins.
curl -H "Origin: http://www.foo.com:8080" \
-H "Host: www.example.com" \
-H "Access-Control-Request-Method: GET" \
-X OPTIONS -v -s \
http://$GATEWAY_HOST \
1> /dev/null
Note:
- The CORS specification requires browsers to send a preflight request to the server to ask whether they are allowed to access the restricted resource in another domain. Browsers are supposed to follow the server's response to determine whether to send the actual request. The CORS filter only responds to preflight requests according to its configuration; it won't deny any requests. Browsers are responsible for enforcing the CORS policy.
- The targeted HTTPRoute or the HTTPRoutes that the targeted Gateway routes to must allow the OPTIONS method for the CORS filter to work. Otherwise, the OPTIONS request won’t match the routes and the CORS filter won’t be invoked.
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.
Delete the SecurityPolicy:
kubectl delete securitypolicy/cors-example
Next Steps
Checkout the Developer Guide to get involved in the project.
6 - External Authorization
This task provides instructions for configuring external authorization.
External authorization calls an external HTTP or gRPC service to check whether an incoming HTTP request is authorized or not. If the request is deemed unauthorized, then the request will be denied with a 403 (Forbidden) response. If the request is authorized, then the request will be allowed to proceed to the backend service.
Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure external authorization. This instantiated resource can be linked to a Gateway and HTTPRoute resource.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
HTTP External Authorization Service
Installation
Install a demo HTTP service that will be used as the external authorization service:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/ext-auth-http-service.yaml
Create a new HTTPRoute resource to route traffic on the path /myapp
to the backend service.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: myapp
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /myapp
backendRefs:
- name: backend
port: 3000
EOF
Verify the HTTPRoute status:
kubectl get httproute/myapp -o yaml
Configuration
Create a new SecurityPolicy resource to configure the external authorization. This SecurityPolicy targets the HTTPRoute "myapp" created in the previous step. It calls the HTTP external authorization service "http-ext-auth" on port 9002 for authorization. The headersToBackend field specifies the headers that will be sent to the backend service if the request is successfully authorized.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: ext-auth-example
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: myapp
extAuth:
http:
backendRefs:
- name: http-ext-auth
port: 9002
headersToBackend: ["x-current-user"]
EOF
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/ext-auth-example -o yaml
Testing
Ensure the GATEWAY_HOST
environment variable from the Quickstart is set. If not, follow the
Quickstart instructions to set the variable.
echo $GATEWAY_HOST
Send a request to the backend service without Authentication
header:
curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/myapp"
You should see 403 Forbidden
in the response, indicating that the request is not allowed without authentication.
* Connected to 172.18.255.200 (172.18.255.200) port 80 (#0)
> GET /myapp HTTP/1.1
> Host: www.example.com
> User-Agent: curl/7.68.0
> Accept: */*
...
< HTTP/1.1 403 Forbidden
< date: Mon, 11 Mar 2024 03:41:15 GMT
< x-envoy-upstream-service-time: 0
< content-length: 0
<
* Connection #0 to host 172.18.255.200 left intact
Send a request to the backend service with Authentication
header:
curl -v -H "Host: www.example.com" -H "Authorization: Bearer token1" "http://${GATEWAY_HOST}/myapp"
The request should be allowed and you should see the response from the backend service.
Because the x-current-user
header from the auth response has been sent to the backend service,
you should see the x-current-user
header in the response.
"X-Current-User": [
"user1"
],
GRPC External Authorization Service
Installation
Install a demo gRPC service that will be used as the external authorization service. The demo gRPC service is enabled with TLS, and a BackendTLSPolicy is created to configure the communication between the Envoy proxy and the gRPC service.
Note: TLS is optional for HTTP or gRPC external authorization services. However, enabling TLS is recommended for enhanced security in production environments.
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/ext-auth-grpc-service.yaml
The HTTPRoute created in the previous section is still valid and can be used with the gRPC auth service, but if you have not created the HTTPRoute, you can create it now.
Create a new HTTPRoute resource to route traffic on the path /myapp
to the backend service.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: myapp
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /myapp
backendRefs:
- name: backend
port: 3000
EOF
Verify the HTTPRoute status:
kubectl get httproute/myapp -o yaml
Configuration
Update the SecurityPolicy that was created in the previous section to use the gRPC external authorization service. It calls the gRPC external authorization service “grpc-ext-auth” on port 9002 for authorization.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: ext-auth-example
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: myapp
extAuth:
grpc:
backendRefs:
- name: grpc-ext-auth
port: 9002
EOF
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/ext-auth-example -o yaml
Because the gRPC external authorization service is enabled with TLS, a BackendTLSPolicy needs to be created to configure the communication between the Envoy proxy and the gRPC auth service.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha3
kind: BackendTLSPolicy
metadata:
name: grpc-ext-auth-btls
spec:
targetRefs:
- group: ''
kind: Service
name: grpc-ext-auth
sectionName: "9002"
validation:
caCertificateRefs:
- name: grpc-ext-auth-ca
group: ''
kind: ConfigMap
hostname: grpc-ext-auth
EOF
Verify the BackendTLSPolicy configuration:
kubectl get backendtlspolicy/grpc-ext-auth-btls -o yaml
Testing
Ensure the GATEWAY_HOST
environment variable from the Quickstart is set. If not, follow the
Quickstart instructions to set the variable.
echo $GATEWAY_HOST
Send a request to the backend service without Authentication
header:
curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/myapp"
You should see 403 Forbidden
in the response, indicating that the request is not allowed without authentication.
* Connected to 172.18.255.200 (172.18.255.200) port 80 (#0)
> GET /myapp HTTP/1.1
> Host: www.example.com
> User-Agent: curl/7.68.0
> Accept: */*
...
< HTTP/1.1 403 Forbidden
< date: Mon, 11 Mar 2024 03:41:15 GMT
< x-envoy-upstream-service-time: 0
< content-length: 0
<
* Connection #0 to host 172.18.255.200 left intact
Send a request to the backend service with Authentication
header:
curl -v -H "Host: www.example.com" -H "Authorization: Bearer token1" "http://${GATEWAY_HOST}/myapp"
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.
Delete the demo auth services, HTTPRoute, SecurityPolicy and BackendTLSPolicy:
kubectl delete -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/ext-auth-http-service.yaml
kubectl delete -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/ext-auth-grpc-service.yaml
kubectl delete httproute/myapp
kubectl delete securitypolicy/ext-auth-example
kubectl delete backendtlspolicy/grpc-ext-auth-btls
Next Steps
Checkout the Developer Guide to get involved in the project.
7 - IP Allowlist/Denylist
This task provides instructions for configuring IP allowlist/denylist on Envoy Gateway. IP allowlist/denylist checks if an incoming request is from an allowed IP address before routing the request to a backend service.
Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure IP allowlist/denylist. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Configuration
Create a SecurityPolicy
The below SecurityPolicy restricts access to the backend service by allowing requests only from the IP addresses in 10.0.1.0/24.
In this example, the default action is set to Deny, which means that only requests from the specified IP addresses with the Allow action are allowed, and all other requests are denied. You can also change the default action to Allow to allow all requests except those from the specified IP addresses with the Deny action.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: authorization-client-ip
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: backend
authorization:
defaultAction: Deny
rules:
- action: Allow
principal:
clientCIDRs:
- 10.0.1.0/24
EOF
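As mentioned above, the default action can also be flipped to build a denylist. A sketch of that variant, with the same structure but the actions reversed:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: authorization-client-ip
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  authorization:
    defaultAction: Allow
    rules:
    - action: Deny
      principal:
        clientCIDRs:
        - 10.0.1.0/24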
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/authorization-client-ip -o yaml
Original Source IP
It’s important to note that the IP address used for allowlist/denylist is the original source IP address of the request. You can use a ClientTrafficPolicy to configure how Envoy Gateway should determine the original source IP address.
For example, the below ClientTrafficPolicy configures Envoy Gateway to use the X-Forwarded-For header to determine the original source IP address. The numTrustedHops field specifies the number of trusted hops in the X-Forwarded-For header. In this example, numTrustedHops is set to 1, which means that the rightmost IP address in the X-Forwarded-For header is used as the original source IP address.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
name: enable-client-ip-detection
spec:
clientIPDetection:
xForwardedFor:
numTrustedHops: 1
targetRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: eg
EOF
Testing
Ensure the GATEWAY_HOST
environment variable from the Quickstart is set. If not, follow the
Quickstart instructions to set the variable.
echo $GATEWAY_HOST
Send a request to the backend service without the X-Forwarded-For
header:
curl -v -H "Host: www.example.com" "http://${GATEWAY_HOST}/"
You should see 403 Forbidden
in the response, indicating that the request is not allowed.
* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET /get HTTP/1.1
> Host: www.example.com
> User-Agent: curl/8.8.0-DEV
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 403 Forbidden
< content-length: 19
< content-type: text/plain
< date: Mon, 08 Jul 2024 04:23:31 GMT
<
* Connection #0 to host 172.18.255.200 left intact
RBAC: access denied
Send a request to the backend service with the X-Forwarded-For
header:
curl -v -H "Host: www.example.com" -H "X-Forwarded-For: 10.0.1.1" "http://${GATEWAY_HOST}/"
The request should be allowed and you should see the response from the backend service.
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.
Delete the SecurityPolicy and the ClientTrafficPolicy
kubectl delete securitypolicy/authorization-client-ip
kubectl delete clientTrafficPolicy/enable-client-ip-detection
Next Steps
Checkout the Developer Guide to get involved in the project.
8 - JWT Authentication
This task provides instructions for configuring JSON Web Token (JWT) authentication. JWT authentication checks if an incoming request has a valid JWT before routing the request to a backend service. Currently, Envoy Gateway only supports validating a JWT from an HTTP header, e.g. Authorization: Bearer <token>.
Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure JWT authentication. This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
For GRPC - follow the steps from the GRPC Routing example.
Configuration
Allow requests with a valid JWT by creating a SecurityPolicy and attaching it to the example HTTPRoute or GRPCRoute.
HTTPRoute
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/jwt/jwt.yaml
Two HTTPRoutes have been created, one for /foo and another for /bar. A SecurityPolicy has been created and targets the HTTPRoute foo to authenticate requests for /foo. The HTTPRoute bar is not targeted by the SecurityPolicy and will allow unauthenticated requests to /bar.
Verify the HTTPRoute configuration and status:
kubectl get httproute/foo -o yaml
kubectl get httproute/bar -o yaml
The SecurityPolicy is configured for JWT authentication and uses a single JSON Web Key Set (JWKS) provider for authenticating the JWT.
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/jwt-example -o yaml
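The applied manifest is fetched from the repository above. For orientation, a hedged sketch of what a SecurityPolicy of this shape looks like (the provider name and JWKS URI are illustrative and may differ from the actual jwt.yaml):
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-example
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: foo
  jwt:
    providers:
    - name: example
      remoteJWKS:
        uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json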
GRPCRoute
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/jwt/grpc-jwt.yaml
A SecurityPolicy has been created and targets the GRPCRoute yages to authenticate all requests for the yages service.
Verify the GRPCRoute configuration and status:
kubectl get grpcroute/yages -o yaml
The SecurityPolicy is configured for JWT authentication and uses a single JSON Web Key Set (JWKS) provider for authenticating the JWT.
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/jwt-example -o yaml
Testing
Ensure the GATEWAY_HOST
environment variable from the Quickstart is set. If not, follow the
Quickstart instructions to set the variable.
echo $GATEWAY_HOST
HTTPRoute
Verify that requests to /foo
are denied without a JWT:
curl -sS -o /dev/null -H "Host: www.example.com" -w "%{http_code}\n" http://$GATEWAY_HOST/foo
A 401
HTTP response code should be returned.
Get the JWT used for testing request authentication:
TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode
Note: The above command decodes and returns the token’s payload. You can replace f2
with f1
to view the token’s
header.
Verify that a request to /foo
with a valid JWT is allowed:
curl -sS -o /dev/null -H "Host: www.example.com" -H "Authorization: Bearer $TOKEN" -w "%{http_code}\n" http://$GATEWAY_HOST/foo
A 200
HTTP response code should be returned.
Verify that requests to /bar
are allowed without a JWT:
curl -sS -o /dev/null -H "Host: www.example.com" -w "%{http_code}\n" http://$GATEWAY_HOST/bar
GRPCRoute
Verify that requests to yages
service are denied without a JWT:
grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
You should see the below response
Error invoking method "yages.Echo/Ping": rpc error: code = Unauthenticated desc = failed to query for service descriptor "yages.Echo": Jwt is missing
Get the JWT used for testing request authentication:
TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode
Note: The above command decodes and returns the token’s payload. You can replace f2
with f1
to view the token’s
header.
Verify that a request to yages
service with a valid JWT is allowed:
grpcurl -plaintext -H "authorization: Bearer $TOKEN" -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
You should see the below response
{
"text": "pong"
}
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.
Delete the SecurityPolicy:
kubectl delete securitypolicy/jwt-example
Next Steps
Checkout the Developer Guide to get involved in the project.
9 - JWT Claim-Based Authorization
This task provides instructions for configuring JWT claim-based authorization. JWT claim-based authorization checks if an incoming request has the required JWT claims before routing the request to a backend service.
Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure JWT claim-based authorization.
This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Configuration
Create a SecurityPolicy
Please note that JWT claim-based authorization requires a JWT to be present in the request. JWT authentication must be configured in the same SecurityPolicy to validate the JWT and extract the claims.
The below SecurityPolicy configuration allows requests with a valid JWT token that has the following claims:
- user.name claim with the value John Doe
- user.roles claim with the value admin
- scope claim with the values read, add, and modify
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: authorization-jwt-claim
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: backend
jwt:
providers:
- name: example
issuer: https://foo.bar.com
remoteJWKS:
uri: https://raw.githubusercontent.com/envoyproxy/gateway/refs/heads/main/examples/kubernetes/jwt/jwks.json
authorization:
defaultAction: Deny
rules:
- name: "allow"
action: Allow
principal:
jwt:
provider: example
scopes: ["read", "add", "modify"]
claims:
- name: user.name
values: ["John Doe"]
- name: user.roles
valueType: StringArray
values: ["admin"]
EOF
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/authorization-jwt-claim -o yaml
Testing
Ensure the GATEWAY_HOST
environment variable from the Quickstart is set. If not, follow the
Quickstart instructions to set the variable.
echo $GATEWAY_HOST
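If it is not set, you can derive it from the Gateway status, as in the Quickstart:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')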
Define a JWT token with the required claims.
export VALID_TOKEN="eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6ImI1MjBiM2MyYzRiZDc1YTEwZTljZWJjOTU3NjkzM2RjIn0.eyJpc3MiOiJodHRwczovL2Zvby5iYXIuY29tIiwic3ViIjoiMTIzNDU2Nzg5MCIsInVzZXIiOnsibmFtZSI6IkpvaG4gRG9lIiwiZW1haWwiOiJqb2huLmRvZUBleGFtcGxlLmNvbSIsInJvbGVzIjpbImFkbWluIiwiZWRpdG9yIl19LCJwcmVtaXVtX3VzZXIiOnRydWUsImlhdCI6MTUxNjIzOTAyMiwic2NvcGUiOiJyZWFkIGFkZCBkZWxldGUgbW9kaWZ5In0.P36iAlmiRCC79OiB3vstF5Q_9OqUYAMGF3a3H492GlojbV6DcuOz8YIEYGsRSWc-BNJaBKlyvUKsKsGVPtYbbF8ajwZTs64wyO-zhd2R8riPkg_HsW7iwGswV12f5iVRpfQ4AG2owmdOToIaoch0aym89He1ZzEjcShr9olgqlAbbmhnk-namd1rP-xpzPnWhhIVI3mCz5hYYgDTMcM7qbokM5FzFttTRXAn5_Luor23U1062Ct_K53QArwxBvwJ-QYiqcBycHf-hh6sMx_941cUswrZucCpa-EwA3piATf9PKAyeeWHfHV9X-y8ipGOFg3mYMMVBuUZ1lBkJCik9f9kboRY6QzpOISARQj9PKMXfxZdIPNuGmA7msSNAXQgqkvbx04jMwb9U7eCEdGZztH4C8LhlRjgj0ZdD7eNbRjeH2F6zrWyMUpGWaWyq6rMuP98W2DWM5ZflK6qvT1c7FuFsWPvWLkgxQwTWQKrHdKwdbsu32Sj8VtUBJ0-ddEb"
Decode the JWT token to verify that it has the required claims.
jq -R 'split(".") | .[0],.[1] | @base64d | fromjson' <<< $(echo ${VALID_TOKEN})
The decoded JWT token should look like the following:
{
"typ": "JWT",
"alg": "RS256",
"kid": "b520b3c2c4bd75a10e9cebc9576933dc"
}
{
"iss": "https://foo.bar.com",
"sub": "1234567890",
"user": {
"name": "John Doe",
"email": "john.doe@example.com",
"roles": [
"admin",
"editor"
]
},
"premium_user": true,
"iat": 1516239022,
"scope": "read add delete modify"
}
Send a request to the backend service with the valid JWT token:
curl -H "Host: www.example.com" -H "Authorization: Bearer ${VALID_TOKEN}" "http://${GATEWAY_HOST}/"
The request should be allowed and you should see the response from the backend service.
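As an additional check, you can print only the HTTP status code, which should be 200:
curl -sS -o /dev/null -H "Host: www.example.com" -H "Authorization: Bearer ${VALID_TOKEN}" -w "%{http_code}\n" "http://${GATEWAY_HOST}/"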
Define a JWT token without the required claims.
export INVALID_TOKEN="eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6ImI1MjBiM2MyYzRiZDc1YTEwZTljZWJjOTU3NjkzM2RjIn0.eyJpc3MiOiJodHRwczovL2Zvby5iYXIuY29tIiwic3ViIjoiMTIzNDU2Nzg5MCIsInVzZXIiOnsibmFtZSI6IkFsaWNlIFNtaXRoIiwiZW1haWwiOiJhbGljZS5zbWl0aEBleGFtcGxlLmNvbSIsInJvbGVzIjpbImRldmVsb3BlciJdfSwicHJlbWl1bV91c2VyIjpmYWxzZSwiaWF0IjoxNTE2MjM5MDIyLCJzY29wZSI6InJlYWQgYWRkIGRlbGV0ZSJ9.Da547nNXzuQXm5E7LuLAiyFswXsW4RDhuitD_rpadtR7PTwzzOsJoqrVWJ_u1jJDaOTWIpLF4gwxDoY-Aoz_couzXzlAbECLs45ZFoc_UdffpfIbGKqTZx8VtwKuDLFsAeDDDqqx1flxFhvXHftJJdZYr1FgFz9u-absMmRU90DLmEZX3Hnyc8k8eBgeiu6vsWUD0-aNy8cWkFRbwRggkGmucFyUTG8Z1MY3iyH5E66W-ISoX8G9bzE9PTxVAAPDTvefD5iLJPSDJ8qV69OuMCJ8Dczq0L9Dd_w0sF-D1s9MTvexmGg4zBWluJ3r-pU9NHEdhqBypehp_yH8xF5Rt9AE7stZ4oPFZNyfrtkE-4IOnSEkMmzcC65g_rscn0ycerv4N5ZNpkr0x2IYYM4iGuo-ULv5Htnli3rffST45kx1XA8cdsrT1D0K3aPxdIxDIk8sTJf5-WVqRyo-bwxXXltwQLB9jCM_7QbTWQBYAJwUpi-0RW4jCl44-42gZnXf"
Decode the JWT token to verify that it does not have the required claims.
jq -R 'split(".") | .[0],.[1] | @base64d | fromjson' <<< $(echo ${INVALID_TOKEN})
The decoded JWT token should look like the following:
{
"typ": "JWT",
"alg": "RS256",
"kid": "b520b3c2c4bd75a10e9cebc9576933dc"
}
{
"iss": "https://foo.bar.com",
"sub": "1234567890",
"user": {
"name": "Alice Smith",
"email": "alice.smith@example.com",
"roles": [
"developer"
]
},
"premium_user": false,
"iat": 1516239022,
"scope": "read add delete"
}
Send a request to the backend service with the invalid JWT token:
curl -v -H "Host: www.example.com" -H "Authorization: Bearer ${INVALID_TOKEN}" "http://${GATEWAY_HOST}/"
The request should be denied and you should see a 403 Forbidden
response.
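Alternatively, print only the status code, which should be 403:
curl -sS -o /dev/null -H "Host: www.example.com" -H "Authorization: Bearer ${INVALID_TOKEN}" -w "%{http_code}\n" "http://${GATEWAY_HOST}/"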
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.
Delete the SecurityPolicy:
kubectl delete securitypolicy/authorization-jwt-claim
Next Steps
Check out the Developer Guide to get involved in the project.
10 - Mutual TLS: External Clients to the Gateway
This task demonstrates how mutual TLS can be achieved between external clients and the Gateway. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.
Prerequisites
- OpenSSL to generate TLS assets.
Installation
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
TLS Certificates
Generate the certificates and keys used by the Gateway to terminate client TLS connections.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
Create a certificate and a private key for www.example.com
:
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
Store the cert/key in a Secret:
kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt
Store the CA Cert in another Secret:
kubectl create secret generic example-ca-cert --from-file=ca.crt=example.com.crt
Create a certificate and a private key for the client client.example.com
:
openssl req -out client.example.com.csr -newkey rsa:2048 -nodes -keyout client.example.com.key -subj "/CN=client.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in client.example.com.csr -out client.example.com.crt
Update the Gateway from the Quickstart to include an HTTPS listener that listens on port 443 and references the example-cert Secret:
kubectl patch gateway eg --type=json --patch '
- op: add
path: /spec/listeners/-
value:
name: https
protocol: HTTPS
port: 443
tls:
mode: Terminate
certificateRefs:
- kind: Secret
group: ""
name: example-cert
'
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Create a ClientTrafficPolicy to enforce client validation using the CA Certificate as a trusted anchor.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
name: enable-mtls
namespace: default
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: eg
tls:
clientValidation:
caCertificateRefs:
- kind: "Secret"
group: ""
name: "example-ca-cert"
EOF
Testing
Get the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Query the example app through the Gateway:
curl -v -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cert client.example.com.crt --key client.example.com.key \
--cacert example.com.crt https://www.example.com/get
Don’t specify the client key and certificate in the above command, and ensure that the connection fails:
curl -v -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cacert example.com.crt https://www.example.com/get
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &
Query the example app through Envoy proxy:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cert client.example.com.crt --key client.example.com.key \
--cacert example.com.crt https://www.example.com:8443/get
11 - OIDC Authentication
This task provides instructions for configuring OpenID Connect (OIDC) authentication. OpenID Connect (OIDC) is an authentication standard built on top of OAuth 2.0. It enables EG to rely on authentication that is performed by an OpenID Connect Provider (OP) to verify the identity of a user.
Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure OIDC authentication. This instantiated resource can be linked to a Gateway and HTTPRoute resource.
Prerequisites
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
EG OIDC authentication requires the redirect URL to be HTTPS. Follow the Secure Gateways guide to generate the TLS certificates and update the Gateway configuration to add an HTTPS listener.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Let’s create an HTTPRoute that represents an application protected by OIDC.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: myapp
spec:
parentRefs:
- name: eg
hostnames: ["www.example.com"]
rules:
- matches:
- path:
type: PathPrefix
value: /myapp
backendRefs:
- name: backend
port: 3000
EOF
Verify the HTTPRoute status:
kubectl get httproute/myapp -o yaml
OIDC Authentication for an HTTPRoute
OIDC can be configured at the Gateway level to authenticate all the HTTPRoutes that are associated with the Gateway with the same OIDC configuration, or at the HTTPRoute level to authenticate each HTTPRoute with different OIDC configurations.
This section demonstrates how to configure OIDC authentication for a specific HTTPRoute.
Register an OIDC application
This task uses Google as the OIDC provider to demonstrate the configuration of OIDC. However, EG works with any OIDC providers, including Auth0, Azure AD, Keycloak, Okta, OneLogin, Salesforce, UAA, etc.
Follow the steps in the Google OIDC documentation to register an OIDC application. Please make sure the redirect URL is set to the one you configured in the SecurityPolicy that you will create in the step below. In this example, the redirect URL is https://www.example.com:8443/myapp/oauth2/callback.
After registering the application, you should have the following information:
- Client ID: The client ID of the OIDC application.
- Client Secret: The client secret of the OIDC application.
Create a kubernetes secret
Next, create a kubernetes secret with the Client Secret created in the previous step. The secret is an Opaque secret, and the Client Secret must be stored in the key “client-secret”.
Note: please replace the ${CLIENT_SECRET} with the actual Client Secret that you got from the previous step.
kubectl create secret generic my-app-client-secret --from-literal=client-secret=${CLIENT_SECRET}
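To confirm the Secret stores the Client Secret under the expected key, you can decode it (note that this prints the secret in plain text):
kubectl get secret my-app-client-secret -o jsonpath='{.data.client-secret}' | base64 -d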
Create a SecurityPolicy
Please note that the redirectURL and logoutPath must match the target HTTPRoute. In this example, the target HTTPRoute is configured to match the host www.example.com and the path /myapp, so the redirectURL must be prefixed with https://www.example.com:8443/myapp, and logoutPath must be prefixed with /myapp; otherwise, the OIDC authentication will fail because the redirect and logout requests will not match the target HTTPRoute and therefore can’t be processed by the OAuth2 filter on that HTTPRoute.
Note: please replace the ${CLIENT_ID} in the below yaml snippet with the actual Client ID that you got from the OIDC provider.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: oidc-example
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: HTTPRoute
name: myapp
oidc:
provider:
issuer: "https://accounts.google.com"
clientID: "${CLIENT_ID}"
clientSecret:
name: "my-app-client-secret"
redirectURL: "https://www.example.com:8443/myapp/oauth2/callback"
logoutPath: "/myapp/logout"
EOF
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/oidc-example -o yaml
Testing
Port forward gateway port to localhost:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443
Put www.example.com in the /etc/hosts file in your test machine, so we can use this host name to access the gateway from a browser:
...
127.0.0.1 www.example.com
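On Linux or macOS, one way to append this entry is (an illustrative command; adjust for your environment):
echo "127.0.0.1 www.example.com" | sudo tee -a /etc/hosts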
Open a browser and navigate to the https://www.example.com:8443/myapp
address. You should be redirected to the Google
login page. After you successfully login, you should see the response from the backend service.
Clean the cookies in the browser and try to access https://www.example.com:8443/foo
address. You should be able to see
this page since the path /foo
is not protected by the OIDC policy.
OIDC Authentication for a Gateway
OIDC can be configured at the Gateway level to authenticate all the HTTPRoutes that are associated with the Gateway with the same OIDC configuration, or at the HTTPRoute level to authenticate each HTTPRoute with different OIDC configurations.
This section demonstrates how to configure OIDC authentication for a Gateway.
Register an OIDC application
If you haven’t registered an OIDC application, follow the steps in the previous section to register an OIDC application.
Create a kubernetes secret
If you haven’t created a kubernetes secret, follow the steps in the previous section to create a kubernetes secret.
Create an HTTPRoute with a different subdomain
Let’s create another HTTPRoute in the same Gateway, but with a different subdomain.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: foo
spec:
parentRefs:
- name: eg
hostnames: ["foo.example.com"]
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- name: backend
port: 3000
EOF
Verify the HTTPRoute status:
kubectl get httproute/foo -o yaml
Create a SecurityPolicy
Create or update the SecurityPolicy to target the Gateway instead of the HTTPRoute. Please note that the redirectURL and logoutPath must match one of the HTTPRoutes associated with the Gateway. In this example, the target Gateway has three HTTPRoutes associated with it: one with the host www.example.com and the path /myapp, one with the host www.example.com and the path /, and one with the host foo.example.com and the path /. Any of these HTTPRoutes can be used to match the redirectURL and logoutPath.
By default, the access token and ID token cookies are set to the host of the request, excluding subdomains. To allow the token cookies to be shared across subdomains and prevent users from having to log in again when switching between subdomains, the cookieDomain field needs to be set to the root domain. In this example, the root domain is example.com.
Note: if a cookieDomain is added to an existing SecurityPolicy, the cookies in the browser must be cleared before sending a new request to the Gateway; otherwise, the cookies with the old subdomain will take precedence and be sent to the Gateway, causing the OIDC authentication to fail.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: oidc-example
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: eg
oidc:
provider:
issuer: "https://accounts.google.com"
clientID: "${CLIENT_ID}"
clientSecret:
name: "my-app-client-secret"
redirectURL: "https://www.example.com:8443/myapp/oauth2/callback"
logoutPath: "/myapp/logout"
cookieDomain: "example.com"
EOF
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/oidc-example -o yaml
Update the Listener TLS certificate to support multiple subdomains
Create a multi-domain wildcard certificate for *.example.com
.
openssl req -out wildcard.csr -newkey rsa:2048 -nodes -keyout wildcard.key -subj "/CN=*.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in wildcard.csr -out wildcard.crt
Replace the TLS certificate of the Gateway with the wildcard certificate.
kubectl delete secret example-cert
kubectl create secret tls example-cert --key=wildcard.key --cert=wildcard.crt
Testing
If you haven’t done so, follow the steps in the previous section to port forward gateway port to localhost and put www.example.com in the /etc/hosts file in your test machine.
Also, put foo.example.com in the /etc/hosts file in your test machine.
...
127.0.0.1 foo.example.com
Open a browser and navigate to the https://www.example.com:8443/myapp
address. You should be redirected to the Google
login page. After you successfully login, you should see the response from the backend service.
You can also try to access https://foo.example.com:8443
and https://www.example.com:8443/bar
addresses. You should
be able to see the response from the backend service since these HTTPRoutes are also protected by the same OIDC config,
and the cookies are shared across subdomains.
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.
Delete the SecurityPolicy, the secret and the HTTPRoute:
kubectl delete securitypolicy/oidc-example
kubectl delete secret/my-app-client-secret
kubectl delete httproute/myapp
kubectl delete httproute/foo
Next Steps
Check out the Developer Guide to get involved in the project.
12 - Secure Gateways
This task will help you get started using secure Gateways. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.
Prerequisites
- OpenSSL to generate TLS assets.
Installation
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
TLS Certificates
Generate the certificates and keys used by the Gateway to terminate client TLS connections.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
Create a certificate and a private key for www.example.com
:
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
Store the cert/key in a Secret:
kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt
Update the Gateway from the Quickstart to include an HTTPS listener that listens on port 443 and references the example-cert Secret:
kubectl patch gateway eg --type=json --patch '
- op: add
path: /spec/listeners/-
value:
name: https
protocol: HTTPS
port: 443
tls:
mode: Terminate
certificateRefs:
- kind: Secret
group: ""
name: example-cert
'
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Testing
Get the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Query the example app through the Gateway:
curl -v -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cacert example.com.crt https://www.example.com/get
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &
Query the example app through Envoy proxy:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get
Multiple HTTPS Listeners
Create a TLS cert/key for the additional HTTPS listener:
openssl req -out foo.example.com.csr -newkey rsa:2048 -nodes -keyout foo.example.com.key -subj "/CN=foo.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in foo.example.com.csr -out foo.example.com.crt
Store the cert/key in a Secret:
kubectl create secret tls foo-cert --key=foo.example.com.key --cert=foo.example.com.crt
Create another HTTPS listener on the example Gateway:
kubectl patch gateway eg --type=json --patch '
- op: add
path: /spec/listeners/-
value:
name: https-foo
protocol: HTTPS
port: 443
hostname: foo.example.com
tls:
mode: Terminate
certificateRefs:
- kind: Secret
group: ""
name: foo-cert
'
Update the HTTPRoute to route traffic for hostname foo.example.com
to the example backend service:
kubectl patch httproute backend --type=json --patch '
- op: add
path: /spec/hostnames/-
value: foo.example.com
'
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Follow the steps in the Testing section to test connectivity to the backend app through both Gateway listeners. Replace www.example.com with foo.example.com to test the new HTTPS listener.
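For example, assuming the port-forward from the Testing section is still running:
curl -v -HHost:foo.example.com --resolve "foo.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://foo.example.com:8443/get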
Cross Namespace Certificate References
A Gateway can be configured to reference a certificate in a different namespace. This is allowed by a ReferenceGrant created in the target namespace. Without the ReferenceGrant, a cross-namespace reference is invalid.
Before proceeding, ensure you can query the HTTPS backend service from the Testing section.
To demonstrate cross namespace certificate references, create a ReferenceGrant that allows Gateways from the “default” namespace to reference Secrets in the “envoy-gateway-system” namespace:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
name: example
namespace: envoy-gateway-system
spec:
from:
- group: gateway.networking.k8s.io
kind: Gateway
namespace: default
to:
- group: ""
kind: Secret
EOF
Delete the previously created Secret:
kubectl delete secret/example-cert
The Gateway HTTPS listener should now surface the Ready: False
status condition and the example HTTPS backend should
no longer be reachable through the Gateway.
kubectl get gateway/eg -o yaml
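To focus on just the HTTPS listener conditions, a jsonpath query can be used (assuming the listener is named https, as configured earlier):
kubectl get gateway/eg -o jsonpath='{.status.listeners[?(@.name=="https")].conditions}'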
Recreate the example Secret in the envoy-gateway-system
namespace:
kubectl create secret tls example-cert -n envoy-gateway-system --key=www.example.com.key --cert=www.example.com.crt
Update the Gateway HTTPS listener with namespace: envoy-gateway-system
, for example:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
spec:
gatewayClassName: eg
listeners:
- name: http
protocol: HTTP
port: 80
- name: https
protocol: HTTPS
port: 443
tls:
mode: Terminate
certificateRefs:
- kind: Secret
group: ""
name: example-cert
namespace: envoy-gateway-system
EOF
The Gateway HTTPS listener status should now surface the Ready: True
condition and you should once again be able to
query the HTTPS backend through the Gateway.
Lastly, test connectivity using the above Testing section.
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.
Delete the Secrets:
kubectl delete secret/example-cert
kubectl delete secret/foo-cert
RSA + ECDSA Dual stack certificates
This section gives a walkthrough to generate RSA and ECDSA derived certificates and keys for the Server, which can then be configured in the Gateway listener, to terminate TLS traffic.
Prerequisites
Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Follow the steps in the TLS Certificates section to generate self-signed RSA derived Server certificate and private key, and configure those in the Gateway listener configuration to terminate HTTPS traffic.
Pre-checks
While testing in a cluster without External LoadBalancer Support, we can query the example app through the Envoy proxy while enforcing an RSA cipher, as shown below:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -Isv --ciphers ECDHE-RSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
Since the Secret configured at this point is an RSA-based Secret, if we enforce the usage of an ECDSA cipher, the call should fail as follows:
$ curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -Isv --ciphers ECDHE-ECDSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
* Added www.example.com:8443:127.0.0.1 to DNS cache
* Hostname www.example.com was found in DNS cache
* Trying 127.0.0.1:8443...
* Connected to www.example.com (127.0.0.1) port 8443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
* Cipher selection: ECDHE-ECDSA-CHACHA20-POLY1305
* CAfile: example.com.crt
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* error:1404B410:SSL routines:ST_CONNECT:sslv3 alert handshake failure
* Closing connection 0
Moving forward in the doc, we will be configuring the existing Gateway listener to accept both kinds of ciphers.
TLS Certificates
Reuse the CA certificate and key pair generated in the Secure Gateways task and use this CA to sign both RSA and ECDSA Server certificates.
Note the CA certificate and key names are example.com.crt
and example.com.key
respectively.
Create an ECDSA certificate and a private key for www.example.com
:
openssl ecparam -noout -genkey -name prime256v1 -out www.example.com.ecdsa.key
openssl req -new -SHA384 -key www.example.com.ecdsa.key -nodes -out www.example.com.ecdsa.csr -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -SHA384 -days 365 -in www.example.com.ecdsa.csr -CA example.com.crt -CAkey example.com.key -CAcreateserial -out www.example.com.ecdsa.crt
Store the cert/key in a Secret:
kubectl create secret tls example-cert-ecdsa --key=www.example.com.ecdsa.key --cert=www.example.com.ecdsa.crt
Patch the Gateway with this additional ECDSA Secret:
kubectl patch gateway eg --type=json --patch '
- op: add
path: /spec/listeners/1/tls/certificateRefs/-
value:
name: example-cert-ecdsa
'
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Testing
Again, while testing in a cluster without External LoadBalancer Support, we can query the example app through the Envoy proxy while enforcing an RSA cipher, which should work as it did before:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -Isv --ciphers ECDHE-RSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
...
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
...
Additionally, querying the example app while enforcing an ECDSA cipher should also work now:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -Isv --ciphers ECDHE-ECDSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
...
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-CHACHA20-POLY1305
...
SNI based Certificate selection
This section gives a walkthrough to generate multiple certificates corresponding to different FQDNs. The same Gateway listener can then be configured to terminate TLS traffic for multiple FQDNs based on SNI matching.
Prerequisites
Follow the steps from the Quickstart to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Follow the steps in the TLS Certificates section to generate self-signed RSA derived Server certificate and private key, and configure those in the Gateway listener configuration to terminate HTTPS traffic.
Additional Configurations
Using the TLS Certificates section, we first generate an additional Secret for another host, www.sample.com.
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=sample Inc./CN=sample.com' -keyout sample.com.key -out sample.com.crt
openssl req -out www.sample.com.csr -newkey rsa:2048 -nodes -keyout www.sample.com.key -subj "/CN=www.sample.com/O=sample organization"
openssl x509 -req -days 365 -CA sample.com.crt -CAkey sample.com.key -set_serial 0 -in www.sample.com.csr -out www.sample.com.crt
kubectl create secret tls sample-cert --key=www.sample.com.key --cert=www.sample.com.crt
Note that all occurrences of example.com were just replaced with sample.com. Next, we update the Gateway configuration to accommodate the new certificate, which will be used to terminate TLS traffic:
kubectl patch gateway eg --type=json --patch '
- op: add
path: /spec/listeners/1/tls/certificateRefs/-
value:
name: sample-cert
'
Finally, we update the HTTPRoute to route traffic for hostname www.sample.com
to the example backend service:
kubectl patch httproute backend --type=json --patch '
- op: add
path: /spec/hostnames/-
value: www.sample.com
'
Testing
Refer to the steps mentioned earlier under Testing in clusters with External LoadBalancer Support
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &
Query the example app through Envoy proxy:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -I
Similarly, query the sample app through the same Envoy proxy:
curl -v -HHost:www.sample.com --resolve "www.sample.com:8443:127.0.0.1" \
--cacert sample.com.crt https://www.sample.com:8443/get -I
Since multiple certificates are configured on the same Gateway listener, Envoy is able to present the client with the appropriate certificate based on the SNI in the client request.
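One way to confirm which certificate is served for each SNI is to inspect the certificate subject with openssl, assuming the port-forward on 8443 is still active; each command should print the subject matching the requested server name:
openssl s_client -connect 127.0.0.1:8443 -servername www.sample.com </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect 127.0.0.1:8443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject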
Customize Gateway TLS Parameters
In addition to enablement of TLS with Gateway-API, Envoy Gateway supports customizing TLS parameters. To achieve this, the ClientTrafficPolicy resource can be used to specify TLS parameters. We will customize the minimum supported TLS version in this example to TLSv1.3.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
name: enforce-tls-13
namespace: default
spec:
targetRefs:
- group: gateway.networking.k8s.io
kind: Gateway
name: eg
tls:
minVersion: "1.3"
EOF
Testing TLS Parameters
Attempt to connect using an unsupported TLS version:
curl -v -HHost:www.sample.com --resolve "www.sample.com:8443:127.0.0.1" \
--cacert sample.com.crt --tlsv1.2 --tls-max 1.2 https://www.sample.com:8443/get -I
[...]
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* LibreSSL/3.3.6: error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
* Closing connection
curl: (35) LibreSSL/3.3.6: error:1404B42E:SSL routines:ST_CONNECT:tlsv1 alert protocol version
The output shows that the connection fails due to an unsupported TLS protocol version used by the client. Now, connect to the Gateway without specifying a client version, and note that the connection is established with TLSv1.3.
curl -v -HHost:www.sample.com --resolve "www.sample.com:8443:127.0.0.1" \
--cacert sample.com.crt https://www.sample.com:8443/get -I
[...]
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
Next Steps
Check out the Developer Guide to get involved in the project.
13 - Threat Model
Envoy Gateway Threat Model and End User Recommendations
About
This work was performed by ControlPlane and commissioned by the Linux Foundation. ControlPlane is a global cloud native and open source cybersecurity consultancy, trusted as the partner of choice in securing: multinational banks; major public clouds; international financial institutions; critical national infrastructure programs; multinational oil and gas companies, healthcare and insurance providers; and global media firms.
Threat Modelling Team
James Callaghan, Torin van den Bulk, Eduardo Olarte
Reviewers
Arko Dasgupta, Matt Turner, Zack Butcher, Marco De Benedictis
Introduction
As we embrace the proliferation of microservice-based architectures in the cloud-native landscape, simplicity in setup and configuration becomes paramount as DevOps teams face the challenge of choosing between numerous similar technologies. One such choice which every team deploying to Kubernetes faces is what to use as an ingress controller. With a plethora of options available, and the existence of vendor-specific annotations leading to small inconsistencies between implementations, the Gateway API project was introduced by the SIG-NETWORK community, with the goal of eventually replacing the Ingress resource.
Envoy Gateway is configured by Gateway API resources, and serves as an intuitive and feature-rich wrapper over the widely acclaimed Envoy Proxy. With a convenient setup based on Kubernetes (K8s) manifests, Envoy Gateway streamlines the management of Envoy Proxy instances in an edge-proxy setting, reducing the operational overhead of managing low-level Envoy configurations. Envoy Gateway benefits cloud-native DevOps teams through its role-oriented configuration, providing granular control based on Role-Based Access Control (RBAC) principles. These features form the basis of our exploration into Envoy Gateway and the rich feature set it brings to the table.
In this threat model, we aim to provide an analysis of Envoy Gateway’s design components and their capabilities (at version 1.0) through a threat-driven approach. It should be noted that this does not constitute a security audit of the Envoy Gateway project, but instead focuses on different possible deployment topologies for Envoy Gateway with the goal of deriving recommendations and best practice guidance for end users.
The Envoy Gateway project recommends a multi-tenancy model whereby each tenant deploys their own Envoy Gateway controller in a namespace which they own. We will also explore the implications and risks associated with multiple tenants using a shared controller.
Scope
The primary focus of this threat model is to identify and assess security risks associated with deploying and operating Envoy Gateway within a multi-tenant Kubernetes (K8s) cluster. This model aims to provide a comprehensive understanding of the system, its transmission points, and potential vulnerabilities to enumerated threats.
In Scope
Envoy Gateway: As the primary focus of this threat model, all aspects of Envoy Gateway, including its configuration, deployment, and operation will be analysed. This includes how the gateway manages TLS certificates, authentication, service-to-service traffic routing, and more.
Kubernetes Cluster: Configuration and operation of the underlying Kubernetes cluster, including how it manages network policies, access control, and resource isolation for different namespaces/tenants in relation to Envoy will be considered.
Tenant Workloads: Tenant workloads (and the pods they run on) will be considered, focusing on how they interact with the Envoy Gateway and potential vulnerabilities that could be exploited.
Out of Scope
This threat model will not consider security risks associated with the underlying infrastructure (e.g., EC2 compute instances and S3 buckets) or non-Envoy related components within the Kubernetes Cluster. It will focus solely on the Envoy Gateway and its interaction with the Kubernetes cluster and tenant workloads.
Implementation of Envoy Gateway as an egress traffic controller is out of scope for this threat model and will not be considered in the report’s findings.
Related Resources
Configuring Envoy as an Edge Proxy
Kubernetes Gateway API Security Model
Architecture Overview
Summary
To provide an in-depth look into both the system design and end-user deployment of Envoy Gateway, we will be focusing on the Deployment Architecture Diagram below.
The Deployment Architecture Diagram provides a high-level model of an end-user deployment of Envoy Gateway. For simplicity, we will look at different deployment topologies on a single multi-tenant Kubernetes cluster. Envoy Gateway operates as an edge proxy within this environment, handling the traffic flow between external interfaces and services within the cluster. The example will use two Envoy Gateway controllers - one dedicated controller for a single tenant, and one shared controller for two other tenants. Each Envoy Gateway controller will accept a single GatewayClass resource.
Deployment Architecture Diagram
As Envoy Gateway implements the Kubernetes GatewayAPI, this threat model will focus on the key objects in the Gateway API resource model:
GatewayClass: defines a set of gateways with a common configuration and behaviour. It is a cluster-scoped resource.
Gateway: requests a point where traffic can be translated to Services within the cluster.
Routes: describe how traffic coming via the Gateway maps to the Services.
At the time of writing, Envoy Gateway only supports a Kubernetes provider. As such, we will consider a reference architecture where multiple teams are working on the same Kubernetes cluster within different namespaces (Tenant A, B, & C). We will assume that some teams have similar security and performance needs, and a decision has been made to use a shared Gateway. However, we will also consider the case that some teams require dedicated Gateways, perhaps for compliance reasons or requirements driven by an internal threat model.
We will consider the following organisational roles, as per the Gateway API security model:
Infrastructure provider: The infrastructure provider (infra) is responsible for the overall environment that the cluster(s) are operating in. Examples include: the cloud provider (AWS, Azure, GCP, …) or the PaaS provider in a company.
Cluster operator: The cluster operator (ops) is responsible for administration of entire clusters. They manage policies, network access, application permissions.
Application developer: The application developer (dev) is responsible for defining their application configuration (e.g. timeouts, request matching/filter) and Service composition (e.g. path routing to backends).
Application admin: The application admin has administrative access to some namespaces within a cluster, but not the cluster as a whole.
Our threat model will be based on the high-level setup shown below, where Envoy is used in an edge-proxy scenario:
The following use cases will be considered, in line with the Envoy Gateway tasks:
- Routing and controlling traffic, including:
a. HTTP
b. TCP
c. UDP
d. gRPC
e. TLS passthrough
f. TLS termination
- Request Authentication
- Rate Limiting
Key Assumptions
This section outlines the foundational premises that shape our analysis and recommendations for the deployment and management of Envoy Gateway within an organisation. The key assumptions are as follows:
1. Kubernetes Provider: For the purposes of this analysis, we assume that a K8s provider will be used to host the cluster.
2. Multi-tenant cluster: In order to produce a broad set of recommendations, it is assumed that within the single cluster, there is:
A dedicated cluster operation (ops) team responsible for maintaining the core cluster infrastructure.
Multiple application teams who wish to define their own Gateway resources, which will route traffic to their respective applications.
3. Soft multi-tenancy model: It is assumed that co-tenants will have some level of trust between themselves, and will not act in an overtly hostile manner to each other.
4. Ingress Control: It’s assumed that Envoy Gateway is the only ingress controller in the K8s cluster as multiple controllers can lead to complex routing challenges and introduce out-of-scope security vulnerabilities.
5. Container Security: This threat model focuses on evaluating the security of the Envoy Gateway and Envoy Proxy images. All other container images running in tenant clusters, not associated with the edge proxy deployment, are assumed to be secure and obtained from trusted registries such as Docker Hub or Google Container Registry (GCR).
6. Cloud Provider Security: It is assumed that the K8s cluster is running on secure cloud infrastructure provided by a trusted Cloud Service Provider (CSP) such as AWS, GCP, or Azure Cloud.
Data
Data Dictionary
Ultimately, the data of interest in a threat model is the business data processed by the system in question. However, in the case of this threat model, we are looking at a generic deployment architecture involving Envoy Gateway in order to draw out a set of generalised threats which can be considered by teams looking to adopt an implementation of Gateway API. As such, we do not know the business impacts of a compromise of confidentiality, integrity or availability that would typically be captured in a data impact assessment. Instead, we will base our threat assessment on high-level groupings of data structures used in the configuration and operation of the general use cases considered (e.g. HTTP routing, TLS termination, request authentication etc.). We will then assign a confidentiality, integrity and availability impact based on a worst-case scenario of how each compromise could potentially affect business data processed by the generic deployment.
Data Name / Type | Notes | Confidentiality | Integrity | Availability |
---|---|---|---|---|
Static Configuration Data | Static configuration data is used to configure Envoy Gateway at startup. This data structure allows for a Provider to be set, which Envoy Gateway calls to establish its runtime configuration, resolve services and persist data. Unauthorised modification of static configuration data could enable the Envoy Gateway admin interface to be configured, logging parameters to be modified, global rate limiting configuration to be misconfigured, or malicious extensions registered for the Envoy Gateway Control Plane. A compromise of confidentiality could potentially give an attacker some useful reconnaissance information. A compromise of the availability of this information at startup time would result in Envoy Gateway starting with default parameters. | Medium | High | Low |
Dynamic Configuration Data | Dynamic configuration data represents the desired state of the Data Plane, and is defined through Envoy Gateway and Gateway API Kubernetes resources. Unauthorised modification of this data could lead to vulnerabilities in an organisation’s Data Plane infrastructure via misconfiguration of an EnvoyProxy custom resource. Misconfiguration of Gateway API objects such as HTTPRoutes or TLSRoutes could result in traffic being directed to incorrect backends. A compromise of confidentiality could potentially give an attacker some useful reconnaissance information. A compromise of the availability of this information could result in tenant application traffic not being routable until the configuration is recovered and reapplied. | Medium | High | Medium |
TLS Private Keys | TLS Private Keys, typically in PEM format, are used to initiate secure connections and encrypt communications. In the context of this threat model, private keys will be associated with the server side of an inbound TLS connection being terminated at a secure gateway configured through Envoy Gateway. Unauthorised exposure could lead to security threats such as person-in-the-middle attacks, whereby the confidentiality or integrity of business data could be compromised. A compromise of integrity may lead to similar consequences if an attacker could insert their own key material. An availability compromise could lead to tenant services being unavailable until new key material is generated and an appropriate CSR submitted. | High | High | Medium |
TLS Certificates | X.509 certificates represent the binding of a public key (associated with the private key described above) to an identity in a TLS handshake. If an attacker could compromise the integrity of a certificate, they may be able to bind the identity of a TLS termination point to a key pair under their control, enabling person-in-the middle attacks. An availability compromise could lead to tenant services being unavailable until new key material is generated and an appropriate CSR submitted. | Low | High | Medium |
JWKs | JWK (JSON Web Key) containing a public key used to validate JWTs for the client authentication use case considered in this threat model. If an attacker could compromise the integrity of a JWK or JSON web key set (JWKS), they could potentially authenticate to a service maliciously. Unavailability of an endpoint exposing JWKs could lead to client requests which require authentication being denied. | Low | High | Medium |
JWTs | JWTs, formatted as compact, URL-safe JSON data structures, are utilised for the client authentication use case considered in this threat model. Maintaining their confidentiality and integrity is vital to prevent unauthorised access and ensure correct user identification. | High | High | Low |
OIDC credentials | In OIDC authentication scenarios, the application credentials are represented by a client ID and a client secret. A compromise of its confidentiality or integrity could allow malicious actors to impersonate the application, potentially being able to access resources on behalf of the application and request ID tokens on behalf of users. Unavailability of this data would produce a rejection of the requests coming from legitimate users. | High | High | Medium |
Basic authentication password hashes | In basic authentication scenarios, passwords are stored as Kubernetes secrets in htpasswd format, where each entry is formed by the username and the hashed password. A compromise of these credentials’ confidentiality and integrity could lead to unauthorised access to the application. Unavailability of these credentials will cause login failures from the application users. | High | High | Medium |
CIA Impact Assessment
Priority | Description |
---|---|
Confidentiality | |
High | Compromise of sensitive client data |
Medium | Information leaked which could be useful for attacker reconnaissance |
Low | Non-sensitive information leakage |
Integrity | |
High | Compromise of source code repositories and gateway deployments |
Medium | Traffic routing fails due to misconfiguration / invalid configuration |
Low | Non-critical operation is blocked due to misconfiguration / invalid configuration |
Availability | |
High | Large scale DoS |
Medium | Tenant application is blocked for a significant period |
Low | Tenant application is blocked for a short period |
Data Flow Diagrams
The Data Flow Diagrams (DFDs) below describe the flow of data between the various processes, entities and data stores in a system, as well as the trust boundaries between different user roles and network interfaces. The DFDs are drawn at two different levels, starting at L0 (high-level system view) and increasing in granularity (to L1).
DFD L0
DFD L1
Key Threats and Recommendations
The scope of this threat model led to us categorising threats into priorities of High, Medium or Low; notably in a production implementation some of the threats’ prioritisation may be upgraded or downgraded depending on the business context and data classification.
Risk vs. Threat
For every finding, the risk and threat are stated. Risk defines the potential for negative outcome while threat defines the event that causes the negative outcome.
Threat Categorization
Throughout this threat model, we categorised threats into different areas based on their origin and the segment of the system that they impact. Here’s an overview of each category:
Container Security (CS): These threats are general to containerised applications. Therefore, they are not associated with Envoy Gateway or the Gateway API and could occur in most containerised workloads. They can originate from misconfigurations or vulnerabilities in the orchestrator or the container.
Gateway API (GW): These are threats related to the Gateway API that could affect any of its implementations. Malicious actors could benefit from misconfigurations or excessive permissions on the Gateway API resources (e.g. xRoutes or Gateways) to compromise the confidentiality, integrity, or availability of the application.
Envoy Gateway (EG): These threats are associated with specific configurations or features from Envoy Gateway or Envoy Proxy. If not set properly, these features could be leveraged to gain unauthorised access to protected resources.
Threat Actors
In order to provide a realistic set of threats that is applicable to most organisations, we de-scoped the most advanced and hard to mitigate threat actors as described below:
In Scope Threat Actors
When considering internal threat actors, we chose to follow the security model of the Kubernetes Gateway API.
Internal Attacker
Cluster Operator: The cluster operator (ops) is responsible for administration of entire clusters. They manage policies, network access, application permissions.
Application Developer: The application developer (dev) is responsible for defining their application configuration (e.g. timeouts, request matching/filter) and Service composition (e.g. path routing to backends).
Application Administrator: The application admin has administrative access to some namespaces within a cluster, but not the cluster as a whole.
External Attacker
Vandal: Script kiddie, trespasser
Motivated Individual: Political activist, thief, terrorist
Organised Crime: Syndicates, state-affiliated groups
Out of Scope Threat Actors
External Actors
Infrastructure Provider: The infrastructure provider (infra) is responsible for the overall environment that the cluster(s) are operating in. Examples include: the cloud provider, or the PaaS provider in a company.
Cloud Service Insider: Employee, external contractor, temporary worker
Foreign Intelligence Services (FIS): Nation states
High Priority Findings
EGTM-001 Usage of self-signed certificates
ID | UID | Category | Priority |
---|---|---|---|
EGTM-001 | EGTM-GW-001 | Gateway API | High |
Risk: Self-signed certificates (which do not comply with PKI best practices) could lead to unauthorised access to the private key associated with the certificate used for inbound TLS termination at Envoy Proxy, compromising the confidentiality and integrity of proxied traffic.
Threat: Compromise of the private key associated with the certificate used for inbound TLS terminating at Envoy Proxy.
Recommendation: The Envoy Gateway quickstart demonstrates how to set up a Secure Gateway using an example where a self-signed root certificate is created using openssl. As stated in the Envoy Gateway documentation, this is not a suitable configuration for Production usage. It is recommended that PKI best practices are followed, whereby certificates are signed by an Intermediary CA which sits underneath an organisational 'offline' Root CA.
PKI best practices should also apply to the management of client certificates when using mTLS. The Envoy Gateway mTLS task shows how to set up client certificates using self-signed certificates. As with gateway certificates, and as mentioned in the documentation, this configuration should not be used in production environments.
EGTM-002 Private keys are stored as Kubernetes secrets
ID | UID | Category | Priority |
---|---|---|---|
EGTM-002 | EGTM-CS-001 | Container Security | High |
Risk: There is a risk that a threat actor could compromise the Kubernetes secret containing the Envoy private key, allowing the attacker to decrypt Envoy Proxy traffic, compromising the confidentiality of proxied traffic.
Threat: Kubernetes secret containing the Envoy private key is compromised and used to decrypt proxied traffic.
Recommendation: Certificate management best practices mandate short-lived key material where practical, meaning that a mechanism for rotation of private keys and certificates is required, along with a way for certificates to be mounted into Envoy containers. If Kubernetes secrets are used, when a certificate expires, the associated secret must be updated, and Envoy containers must be redeployed. Instead of a manual configuration, it is recommended that cert-manager is used.
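As an illustrative sketch only (not part of the original recommendation), a cert-manager Certificate resource can keep the TLS Secret referenced by a Gateway listener automatically renewed. This assumes a cert-manager Issuer named example-issuer already exists in the default namespace:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-cert
  namespace: default
spec:
  secretName: example-cert    # Secret referenced by the Gateway HTTPS listener
  dnsNames:
  - www.example.com
  duration: 2160h             # 90-day certificate lifetime
  renewBefore: 360h           # renew 15 days before expiry
  issuerRef:
    name: example-issuer      # hypothetical Issuer; replace with your own
    kind: Issuer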
EGTM-004 Usage of ClusterRoles with wide permissions
ID | UID | Category | Priority |
---|---|---|---|
EGTM-004 | EGTM-K8-002 | Container Security | High |
Risk: There is a risk that a threat actor could abuse misconfigured RBAC to access the Envoy Gateway ClusterRole (envoy-gateway-role) and use it to expose all secrets across the cluster, thus compromising the confidentiality and integrity of tenant data.
Threat: Compromised Envoy Gateway or misconfigured ClusterRoleBinding (envoy-gateway-rolebinding) to Envoy Gateway ClusterRole (envoy-gateway-role), provides access to resources and secrets in different namespaces.
Recommendation: Users should be aware that Envoy Gateway uses a ClusterRole (envoy-gateway-role) when deployed via the Helm chart, to allow management of Envoy Proxies across different namespaces. This ClusterRole is powerful and includes the ability to read secrets in namespaces which may not be within the purview of Envoy Gateway.
Kubernetes best-practices involve restriction of ClusterRoleBindings, with the use of RoleBindings where possible to limit access per namespace by specifying the namespace in metadata. Namespace isolation reduces the impact of compromise from cluster-scoped roles. Ideally, fine-grained K8s roles should be created per the principle of least privilege to ensure they have the minimum access necessary for role functions.
The pull request #1656 introduced the use of Roles and RoleBindings in namespaced mode. This feature can be leveraged to reduce the amount of permissions required by the Envoy Gateway.
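As a sketch of the namespaced pattern (the namespace, role, and ServiceAccount names below are illustrative and depend on how Envoy Gateway was installed), a Role plus RoleBinding grants secret access only within a single namespace rather than cluster-wide:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: envoy-gateway-secrets-reader   # illustrative name
  namespace: tenant-a                  # hypothetical tenant namespace
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: envoy-gateway-secrets-reader
  namespace: tenant-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: envoy-gateway-secrets-reader
subjects:
  - kind: ServiceAccount
    name: envoy-gateway                # controller ServiceAccount; the name may differ per installation
    namespace: envoy-gateway-system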
EGTM-007 Misconfiguration of Envoy Gateway dynamic config
ID | UID | Category | Priority |
---|---|---|---|
EGTM-007 | EGTM-EG-002 | Envoy Gateway | High |
Risk: There is a risk that a threat actor could exploit misconfigured Kubernetes RBAC to create or modify Gateway API resources with no business need, potentially leading to the compromise of the confidentiality, integrity, and availability of resources and traffic within the cluster.
Threat: Unauthorised creation or misconfiguration of Gateway API resources by a threat actor with cluster-scoped access.
Recommendation: Configure the apiGroup and resource fields in RBAC policies to restrict access to Gateway and GatewayClass resources. Enable namespace isolation by using the namespace field, preventing unauthorised access to gateways in other namespaces.
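For illustration only (names are hypothetical), a namespaced Role of the following shape limits Gateway write access to a single namespace; because GatewayClass is cluster-scoped, write access to it belongs in a separate, tightly held ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gateway-admin          # illustrative name
  namespace: tenant-a          # rights apply only within this namespace
rules:
  - apiGroups: ["gateway.networking.k8s.io"]
    resources: ["gateways"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]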
EGTM-009 Co-tenant misconfigures resource across namespaces
ID | UID | Category | Priority |
---|---|---|---|
EGTM-009 | EGTM-GW-002 | Gateway API | High |
Risk: There is a risk that a co-tenant misconfigures Gateway or Route resources, compromising the confidentiality, integrity, and availability of routed traffic through Envoy Gateway.
Threat: Malicious or accidental co-tenant misconfiguration of Gateways and Routes associated with other application teams.
Recommendation: Dedicated Envoy Gateways should be provided to each tenant within their respective namespace. A one-to-one relationship should be established between GatewayClass and Gateway resources, meaning that each tenant namespace should have their own GatewayClass watched by a unique Envoy Gateway Controller as defined here in the Deployment Mode documentation.
Application Admins should have write permissions on the Gateway resource, but only in their specific namespaces, and Application Developers should only hold write permissions on Route resources. To enact this access control schema, follow the Write Permissions for Advanced 4 Tier Model described in the Kubernetes Gateway API security model. Examples of secured gateway-route topologies can be found here within the Kubernetes Gateway API docs.
Optionally, consider a GitOps model, where only the GitOps operator has the permission to deploy or modify custom resources in production.
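As a sketch of the per-tenant pattern, each tenant receives its own GatewayClass whose controllerName matches that tenant's dedicated Envoy Gateway controller (the controller name below is hypothetical; a default single-controller installation uses gateway.envoyproxy.io/gatewayclass-controller):

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: tenant-a                                   # illustrative per-tenant class
spec:
  controllerName: gateway.envoyproxy.io/tenant-a   # hypothetical name configured on tenant A's controller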
EGTM-014 Malicious image admission
ID | UID | Category | Priority |
---|---|---|---|
EGTM-014 | EGTM-CS-006 | Container Security | High |
Risk: There is a risk that a supply chain attack on Envoy Gateway results in an arbitrary compromise of the confidentiality, integrity or availability of tenant data.
Threat: Supply chain threat actor introduces malicious code into Envoy Gateway or Proxy.
Recommendation: The Envoy Gateway project should continue to work towards conformance with supply-chain security best practices throughout the project lifecycle (for example, as set out in the CNCF Software Supply Chain Best Practices Whitepaper). Adherence to Supply-chain Levels for Software Artefacts (SLSA) standards is crucial for maintaining the security of the system. Employ version control systems to monitor the source and build platforms and assign responsibility to a specific stakeholder.
Integrate a supply chain security tool such as Sigstore, which provides native capabilities for signing and verifying container images and software artefacts. Software Bill of Materials (SBOM), Vulnerability Exploitability eXchange (VEX), and signed artefacts should also be incorporated into the security protocol.
EGTM-020 Out of date or misconfigured Envoy Proxy image
ID | UID | Category | Priority |
---|---|---|---|
EGTM-020 | EGTM-CS-009 | Container Security | High |
Risk: There is a risk that a threat actor exploits an Envoy Proxy vulnerability to remote code execution (RCE) due to out of date or misconfigured Envoy Proxy pod deployment, compromising the confidentiality and integrity of Envoy Proxy along with the availability of the proxy service.
Threat: Deployment of an Envoy Proxy or Gateway image containing exploitable CVEs.
Recommendation: Always use the latest version of the Envoy Proxy image. Regularly check for updates and patch the system as soon as updates become available. Implement a CI/CD pipeline that includes security checks for images and prevents deployment of insecure configurations. A suitable tool should be chosen to provide container vulnerability scanning to mitigate the risk of known vulnerabilities.
Utilise the Pod Security Admission controller to enforce Pod Security Standards and configure the pod security context to limit its capabilities per the principle of least privilege.
EGTM-022 Credentials are stored as Kubernetes Secrets
ID | UID | Category | Priority |
---|---|---|---|
EGTM-022 | EGTM-CS-010 | Container Security | High |
Risk: There is a risk that the OIDC client secret (for OIDC authentication) and user password hashes (for basic authentication) get leaked due to misconfigured RBAC permissions.
Threat: Unauthorised access to the application due to credential leakage.
Recommendation: Ensure that only authorised users and service accounts are able to access secrets. This is especially important in namespaces where SecurityPolicy objects are configured, since those namespaces are the ones to store secrets containing the client secret (in OIDC scenarios) and user password hashes (in basic authentication scenarios).
To do so, minimise the use of ClusterRoles and Roles allowing listing and getting secrets. Perform periodic audits of RBAC permissions.
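As a quick audit aid (the namespace and ServiceAccount below are hypothetical), kubectl can confirm whether a given identity is able to read secrets and can list the bindings that grant such access for manual review:

# Can this ServiceAccount read Secrets in a namespace that stores SecurityPolicy credentials?
kubectl auth can-i get secrets -n tenant-a --as=system:serviceaccount:tenant-a:app-developer

# Enumerate role bindings across the cluster for manual review of secret access
kubectl get rolebindings,clusterrolebindings -A -o wide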
EGTM-023 Weak Authentication
ID | UID | Category | Priority |
---|---|---|---|
EGTM-023 | EGTM-EG-007 | Envoy Gateway | High |
Risk: There is a risk of unauthorised access due to the use of basic authentication, which does not enforce any password restriction in terms of complexity and length. In addition, password hashes are stored in SHA1 format, which is a deprecated hashing function.
Threat: Unauthorised access to the application due to weak authentication mechanisms.
Recommendation: It is recommended to make use of stronger authentication mechanisms (i.e. JWT authentication and OIDC authentication) instead of basic authentication. These authentication mechanisms have many advantages, such as the use of short-lived credentials and a central management of security policies through the identity provider.
Medium Priority Findings
EGTM-008 Misconfiguration of Envoy Gateway static config
ID | UID | Category | Priority |
---|---|---|---|
EGTM-008 | EGTM-EG-003 | Envoy Gateway | Medium |
Risk: There is a risk of a threat actor misconfiguring static config and compromising the integrity of Envoy Gateway, ultimately leading to the compromised confidentiality, integrity, or availability of tenant data and cluster resources.
Threat: Accidental or deliberate misconfiguration of static configuration leads to a misconfigured deployment of Envoy Gateway, for example logging parameters could be modified or global rate limiting configuration misconfigured.
Recommendation: Implement a GitOps model, utilising Kubernetes' Role-Based Access Control (RBAC) and adhering to the principle of least privilege to minimise human intervention on the cluster. For instance, tools like Flux and ArgoCD can be used for declarative GitOps deployments, ensuring all changes are tracked and reviewed. Additionally, configure your source control management (SCM) system to include mandatory pull request (PR) reviews, commit signing, and protected branches to ensure only authorised changes can be committed to the start-up configuration.
EGTM-010 Weak pod security contexts and policies
ID | UID | Category | Priority |
---|---|---|---|
EGTM-010 | EGTM-CS-005 | Container Security | Medium |
Risk: There is a risk that a threat actor exploits a weak pod security context, compromising the CIA of a node and the resources / services which run on it.
Threat: Threat Actor who has compromised a pod exploits weak security context to escape to a node, potentially leading to the compromise of Envoy Proxy or Gateway running on the same node.
Recommendation: To mitigate this risk, apply Pod Security Standards at a minimum of Baseline level to all namespaces, especially those containing Envoy Gateway and Proxy Pods. Pod security standards are implemented through K8s Pod Security Admission to provide admission control modes (enforce, audit, and warn) for namespaces. Pod security standards can be enforced by namespace labels as shown here, to enforce a baseline level of pod security to specific namespaces.
Further enhance the security by implementing a sandboxing solution such as gVisor for Envoy Gateway and Proxy Pods to isolate the application from the host kernel. This can be set within the runtimeClassName of the Pod specification.
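A minimal sketch of enforcing the Baseline standard via namespace labels (the namespace name is illustrative; stricter levels can be audited and warned on in parallel):

apiVersion: v1
kind: Namespace
metadata:
  name: envoy-gateway-system                        # namespace hosting Envoy Gateway / Proxy pods (illustrative)
  labels:
    pod-security.kubernetes.io/enforce: baseline    # reject pods that fall below the Baseline standard
    pod-security.kubernetes.io/audit: restricted    # record violations of the stricter Restricted standard
    pod-security.kubernetes.io/warn: restricted     # surface warnings to users at admission time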
EGTM-012 ClusterRoles and Roles with permission to deploy ReferenceGrants
ID | UID | Category | Priority |
---|---|---|---|
EGTM-012 | EGTM-GW-004 | Gateway API | Medium |
Risk: There is a risk that a threat actor could abuse excessive RBAC privileges to create ReferenceGrant resources. These resources could then be used to create cross-namespace communication, leading to unauthorised access to the application. This could compromise the confidentiality and integrity of resources and configuration in the affected namespaces and potentially disrupt the availability of services that rely on these object references.
Threat: A ReferenceGrant is created, which validates traffic to cross namespace trust boundaries without a valid business reason, such as a route in one tenant's namespace referencing a backend in another.
Recommendation: Ensure that the ability to create ReferenceGrant resources is restricted to the minimum number of people. Pay special attention to ClusterRoles that allow that action.
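For context, a ReferenceGrant of the following shape (namespaces are hypothetical) is what permits an HTTPRoute in one tenant's namespace to reference a backend Service in another, which is why its creation should be tightly controlled:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-routes-from-tenant-a     # illustrative
  namespace: tenant-b                  # grants access to backends in this namespace
spec:
  from:
    - group: gateway.networking.k8s.io
      kind: HTTPRoute
      namespace: tenant-a              # routes in tenant-a may now reference Services here
  to:
    - group: ""
      kind: Service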
EGTM-018 Network Denial of Service (DoS)
ID | UID | Category | Priority |
---|---|---|---|
EGTM-018 | EGTM-GW-006 | Gateway API | Medium |
Risk: There is a risk that malicious requests could lead to a Denial of Service (DoS) attack, thereby reducing API gateway availability due to misconfigurations in rate-limiting or load balancing controls, or a lack of route timeout enforcement.
Threat: Reduced API gateway availability due to an attacker's maliciously crafted request (e.g., QoD) potentially inducing a Denial of Service (DoS) attack.
Recommendation: To ensure high availability and mitigate potential security threats, follow the guidelines in the Envoy Gateway documentation for configuring local rate limit filters, global rate limit filters, and load balancing.
Further, adhere to best practices for configuring Envoy Proxy as an edge proxy documented here within the EnvoyProxy docs. This involves configuring TCP and HTTP proxies with specific settings, including restricting access to the admin endpoint, setting the overload manager and listener / cluster buffer limits, enabling use_remote_address, setting connection and stream timeouts, limiting maximum concurrent streams, setting initial stream window size limit, and configuring action on headers_with_underscores.
Path normalisation should be enabled to minimise path confusion vulnerabilities. These measures help protect against volumetric threats such as Denial of Service (DoS) attacks. Utilise custom resources to implement policy attachment, thereby exposing request limit configuration for route types.
EGTM-019 JWT-based authentication replay attacks
ID | UID | Category | Priority |
---|---|---|---|
EGTM-019 | EGTM-DP-004 | Container Security | Medium |
Risk: There is a risk that replay attacks using stolen or reused JSON Web Tokens (JWTs) can compromise transmission integrity, thereby undermining the confidentiality and integrity of the data plane.
Threat: Transmission integrity is compromised due to replay attacks using stolen or reused JSON Web Tokens (JWTs).
Recommendation: Comply with JWT best practices for enhanced security, paying special attention to the use of short-lived tokens, which reduce the window of opportunity for a replay attack. The exp claim can be used to set token expiration times.
EGTM-024 Excessive privileges via extension policies
ID | UID | Category | Priority |
---|---|---|---|
EGTM-024 | EGTM-EG-008 | Envoy Gateway | Medium |
Risk: There is a risk of developers getting more privileges than required due to the use of SecurityPolicy, ClientTrafficPolicy, EnvoyPatchPolicy and BackendTrafficPolicy. These resources can be attached to a Gateway resource. Therefore, a developer with permission to deploy them would be able to modify a Gateway configuration by targeting the gateway in the policy manifest. This conflicts with the Advanced 4 Tier Model, where developers do not have write permissions on Gateways.
Threat: Excessive developer permissions lead to a misconfiguration and/or unauthorised access.
Recommendation: Considering the Tenant C scenario (represented in the Architecture Diagram), if a developer can create SecurityPolicy, ClientTrafficPolicy, EnvoyPatchPolicy or BackendTrafficPolicy objects in namespace C, they would be able to modify a Gateway configuration by attaching the policy to the gateway. In such scenarios, it is recommended to either:
a. Create a separate namespace, where developers have no permissions, to host tenant C's gateway. Note that, due to design decisions, the SecurityPolicy/EnvoyPatchPolicy/ClientTrafficPolicy/BackendTrafficPolicy object can only target resources deployed in the same namespace. Therefore, having a separate namespace for the gateway would prevent developers from attaching the policy to the gateway.
b. Forbid the creation of these policies for developers in namespace C.
On the other hand, in scenarios similar to tenants A and B, where a shared gateway namespace is in place, this issue is more limited. Note that in this scenario, developers don't have access to the shared gateway namespace.
In addition, it is important to mention that EnvoyPatchPolicy resources can also be attached to GatewayClass resources. This means that, in order to comply with the Advanced 4 Tier model, individuals with the Application Administrator role should not have access to this resource either.
Low Priority Findings
EGTM-003 Misconfiguration leads to insecure TLS settings
ID | UID | Category | Priority |
---|---|---|---|
EGTM-003 | EGTM-EG-001 | Envoy Gateway | Low |
Risk: There is a risk that a threat actor could downgrade the security of proxied connections by configuring a weak set of cipher suites, compromising the confidentiality and integrity of proxied traffic.
Threat: Exploit weak cipher suite configuration to downgrade security of proxied connections.
Recommendation: Users operating in highly regulated environments may need to tightly control the TLS protocol and associated cipher suites, blocking non-conforming incoming connections to the gateway.
EnvoyProxy bootstrap config can be customised as per the customise EnvoyProxy documentation. In addition, from v.1.0.0, it is possible to configure common TLS properties for a Gateway or XRoute through the ClientTrafficPolicy object.
EGTM-005 Envoy Gateway Helm chart deployment does not set AppArmor and Seccomp profiles
ID | UID | Category | Priority |
---|---|---|---|
EGTM-005 | EGTM-CP-002 | Container Security | Low |
Risk: A threat actor who has obtained access to the Envoy Gateway pod could exploit the lack of AppArmor and Seccomp profiles in the Envoy Gateway deployment to attempt a container breakout, given the presence of an exploitable vulnerability, potentially impacting the confidentiality and integrity of node resources.
Threat: Unauthorised syscalls and malicious code running in the Envoy Gateway pod.
Recommendation: Implement AppArmor policies by setting <container_name>: <profile_ref> within container.apparmor.security.beta.kubernetes.io (note, this config is set per container). Well-defined AppArmor policies may provide greater protection from unknown threats.
Enforce Seccomp profiles by setting the seccompProfile under securityContext. Ideally, a fine-grained profile should be used to restrict access to only necessary syscalls; alternatively, the kubelet --seccomp-default flag can be set so that workloads fall back to RuntimeDefault, the container runtime's default profile. Example seccomp profiles can be found here.
To further enhance pod security, consider implementing SELinux via seLinuxOptions for additional syscall attack surface reduction. Setting readOnlyRootFilesystem == true enforces an immutable root filesystem, preventing the addition of malicious binaries to the PATH and increasing the attack cost. Together, these configuration items improve the pod's Security Context.
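A minimal sketch combining these controls in a Pod spec (the container name, image, and AppArmor profile are illustrative; newer Kubernetes releases also expose an appArmorProfile field under securityContext as an alternative to the annotation):

apiVersion: v1
kind: Pod
metadata:
  name: envoy-gateway-example                       # illustrative
  annotations:
    # AppArmor profile per container via the beta annotation referenced above
    container.apparmor.security.beta.kubernetes.io/envoy-gateway: runtime/default
spec:
  containers:
    - name: envoy-gateway
      image: envoyproxy/gateway:latest              # illustrative image reference
      securityContext:
        seccompProfile:
          type: RuntimeDefault                      # or a fine-grained Localhost profile
        readOnlyRootFilesystem: true                # immutable root filesystem
        allowPrivilegeEscalation: false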
EGTM-006 Envoy Proxy pods deployed with a shell enabled
ID | UID | Category | Priority |
---|---|---|---|
EGTM-006 | EGTM-CS-004 | Container Security | Low |
Risk: There is a risk that a threat actor exploits a vulnerability in Envoy Proxy to expose a reverse shell, enabling them to compromise the confidentiality, integrity and availability of tenant data via a secondary attack.
Threat: If an external attacker managed to exploit a vulnerability in Envoy, the presence of a shell would be greatly helpful for the attacker in terms of potentially pivoting, escalating, or establishing some form of persistence.
Recommendation: By default, Envoy uses a distroless image since v.0.6.0, which does not ship a shell. Therefore, ensure EnvoyProxy image is up-to-date and patched with the latest stable version.
If using private EnvoyProxy images, use a lightweight EnvoyProxy image without a shell or debugging tool(s) which may be useful for an attacker.
An AuditPolicy (audit.k8s.io/v1beta1) can be configured to record API calls made within your cluster, allowing for identification of malicious traffic and enabling incident response. Requests are recorded based on stages which delineate between the lifecycle stage of the request made (e.g., RequestReceived, ResponseStarted, & ResponseComplete).
EGTM-011 Route Bindings on custom labels
ID | UID | Category | Priority |
---|---|---|---|
EGTM-011 | EGTM-GW-003 | Gateway API | Low |
Risk: There is a risk that a gateway owner (or someone with the ability to set namespace labels) maliciously or accidentally binds routes across namespace boundaries, potentially compromising the confidentiality and integrity of traffic in a multitenant scenario.
Threat: If a Route Binding within a Gateway Listener is configured based on a custom label, it could allow a malicious internal actor with the ability to label namespaces to change the set of namespaces supported by the Gateway.
Recommendation: Consider the use of custom admission control to restrict what labels can be set on namespaces through tooling such as Kubewarden, Kyverno, and OPA Gatekeeper. Route binding should follow the Kubernetes Gateway API security model, as shown here, to connect gateways in different namespaces.
EGTM-013 GatewayClass namespace validation is not configured
ID | UID | Category | Priority |
---|---|---|---|
EGTM-013 | EGTM-GW-005 | Gateway API | Low |
Risk: There is a risk that an unauthorised actor deploys an unauthorised GatewayClass due to GatewayClass namespace validation not being configured, leading to non-compliance with business and security requirements.
Threat: Unauthorised deployment of Gateway resource via GatewayClass template which crosses namespace trust boundaries.
Recommendation: Leverage GatewayClass namespace validation to limit the namespaces where GatewayClasses can be run through a tool such as OPA Gatekeeper. Reference pull request #24 within gatekeeper-library which outlines how to add GatewayClass namespace validation through a GatewayClassNamespaces API resource kind within the constraints.gatekeeper.sh/v1beta1 apiGroup.
EGTM-015 ServiceAccount token authentication
ID | UID | Category | Priority |
---|---|---|---|
EGTM-015 | EGTM-CS-007 | Container Security | Low |
Risk: There is a risk that threat actors could exploit ServiceAccount tokens for illegitimate authentication, thereby leading to privilege escalation and the undermining of gateway API resources' integrity, confidentiality, and availability.
Threat: The threat arises from threat actors impersonating the envoy-gateway ServiceAccount through the replay of ServiceAccount tokens, thereby achieving escalated privileges and gaining unauthorised access to Kubernetes resources.
Recommendation: Limit the creation of ServiceAccounts to only when necessary, specifically refraining from using default service account tokens, especially for high-privilege service accounts. For legacy clusters running Kubernetes version 1.21 or earlier, note that ServiceAccount tokens are long-lived by default. To disable the automatic mounting of the service account token, set automountServiceAccountToken: false in the PodSpec.
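A sketch of disabling automatic token mounting (the ServiceAccount, Pod, and image names are illustrative); the ServiceAccount-level field sets the default and the Pod-level field makes the opt-out explicit:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: example-sa                       # illustrative
automountServiceAccountToken: false      # do not mount tokens into Pods by default
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                      # illustrative
spec:
  serviceAccountName: example-sa
  automountServiceAccountToken: false    # explicit opt-out at the Pod level as well
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9   # placeholder workload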
EGTM-016 Misconfiguration leads to lack of Envoy Proxy access activity visibility
ID | UID | Category | Priority |
---|---|---|---|
EGTM-016 | EGTM-EG-004 | Envoy Gateway | Low |
Risk: There is a risk that threat actors establish persistence and move laterally through the cluster unnoticed due to limited visibility into access and application-level activity.
Threat: Threat actors establish persistence and move laterally through the cluster unnoticed.
Recommendation: Configure access logging in the EnvoyProxy. Use ProxyAccessLogFormatType (Text or JSON) to specify the log format and ensure that the logs are sent to the desired sink types by setting the ProxyAccessLogSinkType. Make use of FileEnvoyProxyAccessLog or OpenTelemetryEnvoyProxyAccessLog to configure File and OpenTelemetry sinks, respectively. If the settings aren't defined, the default format is sent to stdout.
Additionally, consider leveraging a central logging mechanism such as Fluentd to enhance visibility into access activity and enable effective incident response (IR).
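As an illustrative sketch only (field names follow the EnvoyProxy telemetry API at the time of writing and may differ between Envoy Gateway releases, so check the API reference for your version), a text-format access log written to stdout could look roughly like this:

apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  telemetry:
    accessLog:
      settings:
        - format:
            type: Text                   # ProxyAccessLogFormatType: Text or JSON
            text: |
              [%START_TIME%] "%REQ(:METHOD)% %REQ(:PATH)% %PROTOCOL%" %RESPONSE_CODE%
          sinks:
            - type: File                 # ProxyAccessLogSinkType: File or OpenTelemetry
              file:
                path: /dev/stdout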
EGTM-017 Misconfiguration leads to lack of Envoy Gateway activity visibility
ID | UID | Category | Priority |
---|---|---|---|
EGTM-017 | EGTM-EG-005 | Envoy Gateway | Low |
Risk: There is a risk that an insider misconfigures an Envoy Gateway component and goes unnoticed due to a minimal default logging configuration that responsible stakeholders are either unaware of or cannot readily access.
Threat: The threat emerges from an insider misconfiguring an Envoy Gateway component without detection.
Recommendation: Configure the logging level of the Envoy Gateway using the 'level' field in EnvoyGatewayLogging. Ensure the appropriate logging levels are set for relevant components such as 'gateway-api', 'xds-translator', or 'global-ratelimit'. If left unspecified, the logging level defaults to "info", which may not provide sufficient detail for security monitoring.
Employ a centralised logging mechanism, like Fluentd, to enhance visibility into application-level activity and to enable efficient incident response.
EGTM-021 Exposed Envoy Proxy admin interface
ID | UID | Category | Priority |
---|---|---|---|
EGTM-021 | EGTM-EG-006 | Envoy Gateway | Low |
Risk: There is a risk that the admin interface is exposed without valid business reason, increasing the attack surface.
Threat: Exposed admin interfaces give internal attackers the option to affect production traffic in unauthorised ways, and the option to exploit any vulnerabilities which may be present in the admin interface (e.g. by orchestrating malicious GET requests to the admin interface through CSRF, compromising Envoy Proxy global configuration, or shutting off the service entirely via /quitquitquit).
Recommendation: The Envoy Proxy admin interface is only exposed to localhost, meaning that it is secure by default. However, due to the risk of misconfiguration, this recommendation is included.
Due to the importance of the admin interface, it is recommended to ensure that Envoy Proxies have not been accidentally misconfigured to expose the admin interface to untrusted networks.
EGTM-025 Envoy Proxy pods deployed running as root user in the container
ID | UID | Category | Priority |
---|---|---|---|
EGTM-025 | EGTM-CS-011 | Container Security | Low |
Risk: The presence of a vulnerability, be it in the kernel or another system component, when coupled with containers running as root, could enable a threat actor to escape the container, thereby compromising the confidentiality, integrity, or availability of cluster resources
Threat: The Envoy Proxy container’s root-user configuration can be leveraged by an attacker to escalate privileges, execute a container breakout, and traverse across trust boundaries.
Recommendation: By default, Envoy Gateway deployments do not use root users. Nonetheless, in case a custom image or deployment manifest is to be used, make sure Envoy Proxy pods run as a non-root user with a high UID within the container.
Set runAsUser and runAsGroup security context options to specific UIDs (e.g., runAsUser: 1000 & runAsGroup: 3000) to ensure the container operates with the stipulated non-root user and group ID. If using helm chart deployment, define the user and group ID in the values.yaml file or via the command line during helm install / upgrade.
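A minimal sketch of the pod-level settings (the UIDs, pod name, and image tag are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: envoy-proxy-example              # illustrative
spec:
  securityContext:
    runAsNonRoot: true                   # refuse to start containers that would run as root
    runAsUser: 1000                      # non-root UID
    runAsGroup: 3000                     # non-root GID
  containers:
    - name: envoy
      image: envoyproxy/envoy:distroless-dev   # illustrative image tag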
Appendix
In Scope Threat Actor Details
Threat Actor | Capability | Personal Motivation | Envoy Gateway Attack Samples |
---|---|---|---|
Application Developer | Leverage internal knowledge and personal access to the Envoy Gateway infrastructure to move laterally and transit trust boundaries | Disgruntled / personal grievances. Financial incentives | Misconfigure XRoute resources to expose internal applications. Misconfigure SecurityPolicy objects, reducing the security posture of an application. |
Application Administrator | Abuse privileged status to disrupt operations and tenant cluster services through Envoy Gateway misconfig | Disgruntled / personal grievances. Financial incentives | Create malicious routes to internal applications. Introduce malicious Envoy Proxy images. Expose the Envoy Proxy Admin interface. |
Cluster Operator | Alter application-level deployments by misconfiguring resource dependencies & SCM to introduce vulnerabilities | Disgruntled / personal grievances. Financial incentives. Notoriety | Deploy malicious resources to expose internal applications. Access authentication secrets. Fall victim to phishing attacks and inadvertently share authentication credentials to cloud infrastructure or Kubernetes clusters. |
Vandal: Script Kiddie, Trespasser | Uses publicly available tools and applications (Nmap, Metasploit, CVE PoCs) | Curiosity. Personal fame through defacement / denial of service of prominent public facing web resources | Small scale DoS. Launches prepackaged exploits, runs crypto mining tools. Exploit public-facing application services such as the bastion host to gain an initial foothold in the environment |
Motivated individual: Political activist, Thief, Terrorist | Write tools and exploits required for their means if sufficiently motivated. Tend to use these in a targeted fashion against specific organisations. May combine publicly available exploits in a targeted fashion. Tamper with open source supply chains | Personal Gain (Political or Ideological) | Phishing, DDOS, exploit known vulnerabilities. Compromise third-party components such as Helm charts and container images to inject malicious codes to propagate access throughout the environment. |
Organised crime: syndicates, state-affiliated groups | Write tools and exploits required for their means. Tend to use these in a non-targeted fashion, unless motivation is sufficiently high. Devotes considerable resources, writes exploits, can bribe/coerce, can launch targeted attacks | Ransom. Mass extraction of PII / credentials / PCI data. Financial incentives | Social Engineering, phishing, ransomware, coordinated attacks. Intercept and replay JWT tokens (via MiTM) between tenant user(s) and envoy gateway to modify app configs in-transit |
Identified Threats by Priority
ID | UID | Category | Risk | Threat | Priority | Recommendation |
---|---|---|---|---|---|---|
EGTM-001 | EGTM-GW-001 | Gateway API | Self-signed certificates (which do not comply with PKI best practices) could lead to unauthorised access to the private key associated with the certificate used for inbound TLS termination at Envoy Proxy, compromising the confidentiality and integrity of proxied traffic. | Compromise of the private key associated with the certificate used for inbound TLS terminating at Envoy Proxy. | High | The Envoy Gateway quickstart demonstrates how to set up a Secure Gateway using an example where a self-signed root certificate is created using openssl. As stated in the Envoy Gateway documentation, this is not a suitable configuration for Production usage. It is recommended that PKI best practices are followed, whereby certificates are signed by an Intermediary CA which sits underneath an organisational 'offline' Root CA. PKI best practices should also apply to the management of client certificates when using mTLS. The Envoy Gateway mTLS task shows how to set up client certificates using self-signed certificates. In the same way as gateway certificates and, as mentioned in the documentation, this configuration should not be used in production environments. |
EGTM-002 | EGTM-CS-001 | Container Security | There is a risk that a threat actor could compromise the Kubernetes secret containing the Envoy private key, allowing the attacker to decrypt Envoy Proxy traffic, compromising the confidentiality of proxied traffic. | Kubernetes secret containing the Envoy private key is compromised and used to decrypt proxied traffic. | High | Certificate management best practices mandate short-lived key material where practical, meaning that a mechanism for rotation of private keys and certificates is required, along with a way for certificates to be mounted into Envoy containers. If Kubernetes secrets are used, when a certificate expires, the associated secret must be updated, and Envoy containers must be redeployed. Instead of a manual configuration, it is recommended that cert-manager is used. |
EGTM-004 | EGTM-K8-002 | Container Security | There is a risk that a threat actor could abuse misconfigured RBAC to access the Envoy Gateway ClusterRole (envoy-gateway-role) and use it to expose all secrets across the cluster, thus compromising the confidentiality and integrity of tenant data. | Compromised Envoy Gateway or misconfigured ClusterRoleBinding (envoy-gateway-rolebinding) to Envoy Gateway ClusterRole (envoy-gateway-role), provides access to resources and secrets in different namespaces. | High | Users should be aware that Envoy Gateway uses a ClusterRole (envoy-gateway-role) when deployed via the Helm chart, to allow management of Envoy Proxies across different namespaces. This ClusterRole is powerful and includes the ability to read secrets in namespaces which may not be within the purview of Envoy Gateway. Kubernetes best-practices involve restriction of ClusterRoleBindings, with the use of RoleBindings where possible to limit access per namespace by specifying the namespace in metadata. Namespace isolation reduces the impact of compromise from cluster-scoped roles. Ideally, fine-grained K8s roles should be created per the principle of least privilege to ensure they have the minimum access necessary for role functions. The pull request #1656 introduced the use of Roles and RoleBindings in namespaced mode. This feature can be leveraged to reduce the amount of permissions required by the Envoy Gateway. |
EGTM-007 | EGTM-EG-002 | Envoy Gateway | There is a risk that a threat actor could exploit misconfigured Kubernetes RBAC to create or modify Gateway API resources with no business need, potentially leading to the compromise of the confidentiality, integrity, and availability of resources and traffic within the cluster. | Unauthorised creation or misconfiguration of Gateway API resources by a threat actor with cluster-scoped access. | High | Configure the apiGroup and resource fields in RBAC policies to restrict access to Gateway and GatewayClass resources. Enable namespace isolation by using the namespace field, preventing unauthorised access to gateways in other namespaces. |
EGTM-009 | EGTM-GW-002 | Gateway API | There is a risk that a co-tenant misconfigures Gateway or Route resources, compromising the confidentiality, integrity, and availability of routed traffic through Envoy Gateway. | Malicious or accidental co-tenant misconfiguration of Gateways and Routes associated with other application teams. | High | Dedicated Envoy Gateways should be provided to each tenant within their respective namespace. A one-to-one relationship should be established between GatewayClass and Gateway resources, meaning that each tenant namespace should have their own GatewayClass watched by a unique Envoy Gateway Controller as defined here in the Deployment Mode documentation. Application Admins should have write permissions on the Gateway resource, but only in their specific namespaces, and Application Developers should only hold write permissions on Route resources. To enact this access control schema, follow the Write Permissions for Advanced 4 Tier Model described in the Kubernetes Gateway API security model. Examples of secured gateway-route topologies can be found here within the Kubernetes Gateway API docs. Optionally, consider a GitOps model, where only the GitOps operator has the permission to deploy or modify custom resources in production. |
EGTM-014 | EGTM-CS-006 | Container Security | There is a risk that a supply chain attack on Envoy Gateway results in an arbitrary compromise of the confidentiality, integrity or availability of tenant data. | Supply chain threat actor introduces malicious code into Envoy Gateway or Proxy. | High | The Envoy Gateway project should continue to work towards conformance with supply-chain security best practices throughout the project lifecycle (for example, as set out in the CNCF Software Supply Chain Best Practices Whitepaper). Adherence to Supply-chain Levels for Software Artefacts (SLSA) standards is crucial for maintaining the security of the system. Employ version control systems to monitor the source and build platforms and assign responsibility to a specific stakeholder. Integrate a supply chain security tool such as Sigstore, which provides native capabilities for signing and verifying container images and software artefacts. Software Bill of Materials (SBOM), Vulnerability Exploitability eXchange (VEX), and signed artefacts should also be incorporated into the security protocol. |
EGTM-020 | EGTM-CS-009 | Container Security | There is a risk that a threat actor exploits an Envoy Proxy vulnerability to remote code execution (RCE) due to out of date or misconfigured Envoy Proxy pod deployment, compromising the confidentiality and integrity of Envoy Proxy along with the availability of the proxy service. | Deployment of an Envoy Proxy or Gateway image containing exploitable CVEs. | High | Always use the latest version of the Envoy Proxy image. Regularly check for updates and patch the system as soon as updates become available. Implement a CI/CD pipeline that includes security checks for images and prevents deployment of insecure configurations. A tool such as Snyk can be used to provide container vulnerability scanning to mitigate the risk of known vulnerabilities. Utilise the Pod Security Admission controller to enforce Pod Security Standards and configure the pod security context to limit its capabilities per the principle of least privilege. |
EGTM-022 | EGTM-CS-010 | Container Security | There is a risk that the OIDC client secret (for OIDC authentication) and user password hashes (for basic authentication) get leaked due to misconfigured RBAC permissions. | Unauthorised access to the application due to credential leakage. | High | Ensure that only authorised users and service accounts are able to access secrets. This is especially important in namespaces where SecurityPolicy objects are configured, since those namespaces are the ones to store secrets containing the client secret (in OIDC scenarios) and user password hashes (in basic authentication scenarios). To do so, minimise the use of ClusterRoles and Roles allowing listing and getting secrets. Perform periodic audits of RBAC permissions. |
EGTM-023 | EGTM-EG-007 | Envoy Gateway | There is a risk of unauthorised access due to the use of basic authentication, which does not enforce any password restriction in terms of complexity and length. In addition, password hashes are stored in SHA1 format, which is a deprecated hashing function. | Unauthorised access to the application due to weak authentication mechanisms. | High | It is recommended to make use of stronger authentication mechanisms (i.e. JWT authentication and OIDC authentication) instead of basic authentication. These authentication mechanisms have many advantages, such as the use of short-lived credentials and a central management of security policies through the identity provider. |
EGTM-008 | EGTM-EG-003 | Envoy Gateway | There is a risk of a threat actor misconfiguring static config and compromising the integrity of Envoy Gateway, ultimately leading to the compromised confidentiality, integrity, or availability of tenant data and cluster resources. | Accidental or deliberate misconfiguration of static configuration leads to a misconfigured deployment of Envoy Gateway, for example logging parameters could be modified or global rate limiting configuration misconfigured. | Medium | Implement a GitOps model, utilising Kubernetes' Role-Based Access Control (RBAC) and adhering to the principle of least privilege to minimise human intervention on the cluster. For instance, tools like ArgoCD can be used for declarative GitOps deployments, ensuring all changes are tracked and reviewed. Additionally, configure your source control management (SCM) system to include mandatory pull request (PR) reviews, commit signing, and protected branches to ensure only authorised changes can be committed to the start-up configuration. |
EGTM-010 | EGTM-CS-005 | Container Security | There is a risk that a threat actor exploits a weak pod security context, compromising the CIA of a node and the resources / services which run on it. | Threat Actor who has compromised a pod exploits weak security context to escape to a node, potentially leading to the compromise of Envoy Proxy or Gateway running on the same node. | Medium | To mitigate this risk, apply Pod Security Standards at a minimum of Baseline level to all namespaces, especially those containing Envoy Gateway and Proxy Pods. Pod security standards are implemented through K8s Pod Security Admission to provide admission control modes (enforce, audit, and warn) for namespaces. Pod security standards can be enforced by namespace labels as shown here, to enforce a baseline level of pod security to specific namespaces. Further enhance the security by implementing a sandboxing solution such as gVisor for Envoy Gateway and Proxy Pods to isolate the application from the host kernel. This can be set within the runtimeClassName of the Pod specification. |
EGTM-012 | EGTM-GW-004 | Gateway API | There is a risk that a threat actor could abuse excessive RBAC privileges to create ReferenceGrant resources. These resources could then be used to create cross-namespace communication, leading to unauthorised access to the application. This could compromise the confidentiality and integrity of resources and configuration in the affected namespaces and potentially disrupt the availability of services that rely on these object references. | A ReferenceGrant is created, which validates traffic to cross namespace trust boundaries without a valid business reason, such as a route in one tenant's namespace referencing a backend in another. | Medium | Ensure that the ability to create ReferenceGrant resources is restricted to the minimum number of people. Pay special attention to ClusterRoles that allow that action. |
EGTM-018 | EGTM-GW-006 | Gateway API | There is a risk that malicious requests could lead to a Denial of Service (DoS) attack, thereby reducing API gateway availability due to misconfigurations in rate-limiting or load balancing controls, or a lack of route timeout enforcement. | Reduced API gateway availability due to an attacker's maliciously crafted request (e.g., QoD) potentially inducing a Denial of Service (DoS) attack. | Medium | To ensure high availability and to mitigate potential security threats, adhere to the Envoy Gateway documentation for the configuration of a rate-limiting filter and load balancing. Further, adhere to best practices for configuring Envoy Proxy as an edge proxy documented here within the EnvoyProxy docs. This involves configuring TCP and HTTP proxies with specific settings, including restricting access to the admin endpoint, setting the overload manager and listener / cluster buffer limits, enabling use_remote_address, setting connection and stream timeouts, limiting maximum concurrent streams, setting initial stream window size limit, and configuring action on headers_with_underscores. Path normalisation should be enabled to minimise path confusion vulnerabilities. These measures help protect against volumetric threats such as Denial of Service (DoS) attacks. Utilise custom resources to implement policy attachment, thereby exposing request limit configuration for route types. |
EGTM-019 | EGTM-DP-004 | Container Security | There is a risk that replay attacks using stolen or reused JSON Web Tokens (JWTs) can compromise transmission integrity, thereby undermining the confidentiality and integrity of the data plane. | Transmission integrity is compromised due to replay attacks using stolen or reused JSON Web Tokens (JWTs). | Medium | Comply with JWT best practices for enhanced security, paying special attention to the use of short-lived tokens, which reduce the window of opportunity for a replay attack. The exp claim can be used to set token expiration times. |
EGTM-024 | EGTM-EG-008 | Envoy Gateway | There is a risk of developers getting more privileges than required due to the use of SecurityPolicy, ClientTrafficPolicy, EnvoyPatchPolicy and BackendTrafficPolicy. These resources can be attached to a Gateway resource. Therefore, a developer with permission to deploy them would be able to modify a Gateway configuration by targeting the gateway in the policy manifest. This conflicts with the Advanced 4 Tier Model, where developers do not have write permissions on Gateways. | Excessive developer permissions lead to a misconfiguration and/or unauthorised access. | Medium | Considering the Tenant C scenario (represented in the Architecture Diagram), if a developer can create SecurityPolicy, ClientTrafficPolicy, EnvoyPatchPolicy or BackendTrafficPolicy objects in namespace C, they would be able to modify a Gateway configuration by attaching the policy to the gateway. In such scenarios, it is recommended to either: a. Create a separate namespace, where developers have no permissions, to host tenant C's gateway. Note that, due to design decisions, the SecurityPolicy/EnvoyPatchPolicy/ClientTrafficPolicy/BackendTrafficPolicy object can only target resources deployed in the same namespace. Therefore, having a separate namespace for the gateway would prevent developers from attaching the policy to the gateway. b. Forbid the creation of these policies for developers in namespace C. On the other hand, in scenarios similar to tenants A and B, where a shared gateway namespace is in place, this issue is more limited. Note that in this scenario, developers don't have access to the shared gateway namespace. In addition, it is important to mention that EnvoyPatchPolicy resources can also be attached to GatewayClass resources. This means that, in order to comply with the Advanced 4 Tier model, individuals with the Application Administrator role should not have access to this resource either. |
EGTM-003 | EGTM-EG-001 | Envoy Gateway | There is a risk that a threat actor could downgrade the security of proxied connections by configuring a weak set of cipher suites, compromising the confidentiality and integrity of proxied traffic. | Exploit weak cipher suite configuration to downgrade security of proxied connections. | Low | Users operating in highly regulated environments may need to tightly control the TLS protocol and associated cipher suites, blocking non-conforming incoming connections to the gateway. EnvoyProxy bootstrap config can be customised as per the customise EnvoyProxy documentation. In addition, from v.1.0.0, it is possible to configure common TLS properties for a Gateway or XRoute through the ClientTrafficPolicy object. |
EGTM-005 | EGTM-CP-002 | Container Security | Threat actor who has obtained access to Envoy Gateway pod could exploit the lack of AppArmor and Seccomp profiles in the Envoy Gateway deployment to attempt a container breakout, given the presence of an exploitable vulnerability, potentially impacting the confidentiality and integrity of namespace resources. | Unauthorised syscalls and malicious code running in the Envoy Gateway pod. | Low | Implement AppArmor policies by setting <container_name>: <profile_ref> within container.apparmor.security.beta.kubernetes.io (note, this config is set per container). Well-defined AppArmor policies may provide greater protection from unknown threats. Enforce Seccomp profiles by setting the seccompProfile under securityContext. Ideally, a fine-grained profile should be used to restrict access to only necessary syscalls; alternatively, the kubelet --seccomp-default flag can be set so that workloads fall back to RuntimeDefault, the container runtime's default profile. Example seccomp profiles can be found here. To further enhance pod security, consider implementing SELinux via seLinuxOptions for additional syscall attack surface reduction. Setting readOnlyRootFilesystem == true enforces an immutable root filesystem, preventing the addition of malicious binaries to the PATH and increasing the attack cost. Together, these configuration items improve the pod's Security Context. |
EGTM-006 | EGTM-CS-004 | Container Security | There is a risk that a threat actor exploits a vulnerability in Envoy Proxy to expose a reverse shell, enabling them to compromise the confidentiality, integrity and availability of tenant data via a secondary attack. | If an external attacker managed to exploit a vulnerability in Envoy, the presence of a shell would be greatly helpful for the attacker in terms of potentially pivoting, escalating, or establishing some form of persistence. | Low | By default, Envoy uses a distroless image since v.0.6.0, which does not ship a shell. Therefore, ensure EnvoyProxy image is up-to-date and patched with the latest stable version. If using private EnvoyProxy images, use a lightweight EnvoyProxy image without a shell or debugging tool(s) which may be useful for an attacker. An AuditPolicy (audit.k8s.io/v1beta1) can be configured to record API calls made within your cluster, allowing for identification of malicious traffic and enabling incident response. Requests are recorded based on stages which delineate between the lifecycle stage of the request made (e.g., RequestReceived, ResponseStarted, & ResponseComplete). |
EGTM-011 | EGTM-GW-003 | Gateway API | There is a risk that a gateway owner (or someone with the ability to set namespace labels) maliciously or accidentally binds routes across namespace boundaries, potentially compromising the confidentiality and integrity of traffic in a multitenant scenario. | If a Route Binding within a Gateway Listener is configured based on a custom label, it could allow a malicious internal actor with the ability to label namespaces to change the set of namespaces supported by the Gateway | Low | Consider the use of custom admission control to restrict what labels can be set on namespaces through tooling such as Kubewarden, Kyverno, and OPA Gatekeeper. Route binding should follow the Kubernetes Gateway API security model, as shown here, to connect gateways in different namespaces. |
EGTM-013 | EGTM-GW-005 | Gateway API | There is a risk that an unauthorised actor deploys an unauthorised GatewayClass due to GatewayClass namespace validation not being configured, leading to non-compliance with business and security requirements. | Unauthorised deployment of Gateway resource via GatewayClass template which crosses namespace trust boundaries. | Low | Leverage GatewayClass namespace validation to limit the namespaces where GatewayClasses can be run through a tool such as OPA Gatekeeper. Reference pull request #24 within gatekeeper-library which outlines how to add GatewayClass namespace validation through a GatewayClassNamespaces API resource kind within the constraints.gatekeeper.sh/v1beta1 apiGroup. |
EGTM-015 | EGTM-CS-007 | Container Security | There is a risk that threat actors could exploit ServiceAccount tokens for illegitimate authentication, thereby leading to privilege escalation and the undermining of gateway API resources' integrity, confidentiality, and availability. | The threat arises from threat actors impersonating the envoy-gateway ServiceAccount through the replay of ServiceAccount tokens, thereby achieving escalated privileges and gaining unauthorised access to Kubernetes resources. | Low | Limit the creation of ServiceAccounts to only when necessary, specifically refraining from using default service account tokens, especially for high-privilege service accounts. For legacy clusters running Kubernetes version 1.21 or earlier, note that ServiceAccount tokens are long-lived by default. To disable the automatic mounting of the service account token, set automountServiceAccountToken: false in the PodSpec. |
EGTM-016 | EGTM-EG-004 | Envoy Gateway | There is a risk that threat actors establish persistence and move laterally through the cluster unnoticed due to limited visibility into access and application-level activity. | Threat actors establish persistence and move laterally through the cluster unnoticed. | Low | Configure access logging in the EnvoyProxy. Use ProxyAccessLogFormatType (Text or JSON) to specify the log format and ensure that the logs are sent to the desired sink types by setting the ProxyAccessLogSinkType. Make use of FileEnvoyProxyAccessLog or OpenTelemetryEnvoyProxyAccessLog to configure File and OpenTelemetry sinks, respectively. If the settings aren't defined, the default format is sent to stdout. Additionally, consider leveraging a central logging mechanism such as Fluentd to enhance visibility into access activity and enable effective incident response (IR). |
EGTM-017 | EGTM-EG-005 | Envoy Gateway | There is a risk that an insider misconfigures an envoy gateway component and goes unnoticed due to a low-touch logging configuration (via default) which responsible stakeholders are not aptly aware of or have immediate access to. | The threat emerges from an insider misconfiguring an Envoy Gateway component without detection. | Low | Configure the logging level of the Envoy Gateway using the 'level' field in EnvoyGatewayLogging. Ensure the appropriate logging levels are set for relevant components such as 'gateway-api', 'xds-translator', or 'global-ratelimit'. If left unspecified, the logging level defaults to "info", which may not provide sufficient detail for security monitoring. Employ a centralised logging mechanism, like Fluentd, to enhance visibility into application-level activity and to enable efficient incident response. |
EGTM-021 | EGTM-EG-006 | Envoy Gateway | There is a risk that the admin interface is exposed without valid business reason, increasing the attack surface. | Exposed admin interfaces give internal attackers the option to affect production traffic in unauthorised ways, and the option to exploit any vulnerabilities which may be present in the admin interface (e.g. by orchestrating malicious GET requests to the admin interface through CSRF, compromising Envoy Proxy global configuration, or shutting off the service entirely via /quitquitquit). | Low | The Envoy Proxy admin interface is only exposed to localhost, meaning that it is secure by default. However, due to the risk of misconfiguration, this recommendation is included. Due to the importance of the admin interface, it is recommended to ensure that Envoy Proxies have not been accidentally misconfigured to expose the admin interface to untrusted networks. |
EGTM-025 | EGTM-CS-011 | Container Security | The presence of a vulnerability, be it in the kernel or another system component, when coupled with containers running as root, could enable a threat actor to escape the container, thereby compromising the confidentiality, integrity, or availability of cluster resources. | The Envoy Proxy container’s root-user configuration can be leveraged by an attacker to escalate privileges, execute a container breakout, and traverse across trust boundaries. | Low | By default, Envoy Gateway deployments do not use root users. Nonetheless, in case a custom image or deployment manifest is to be used, make sure Envoy Proxy pods run as a non-root user with a high UID within the container. Set runAsUser and runAsGroup security context options to specific UIDs (e.g., runAsUser: 1000 & runAsGroup: 3000) to ensure the container operates with the stipulated non-root user and group ID. If using helm chart deployment, define the user and group ID in the values.yaml file or via the command line during helm install / upgrade. |
Attack Trees
Attack trees offer a methodical way of describing the security of systems, based on varying attack patterns. It’s important to approach the review of attack trees from a top-down perspective. The top node, also known as the root node, symbolises the attacker’s primary objective. This goal is then broken down into subsidiary aims, each reflecting a different strategy to attain the root objective. This deconstruction persists until reaching the lowest level objectives or ’leaf nodes’, which depict attacks that can be directly launched.
It is essential to note that attack trees presented here are speculative paths for potential exploitation. The Envoy Gateway project is in a continuous development cycle, and as the project evolves, new vulnerabilities may be exposed, or additional controls could be introduced. Therefore, the threats illustrated in the attack trees should be perceived as point-in-time reflections of the project’s current state at the time of writing this threat model.
Node ID Schema
Each node in the attack tree is assigned a unique identifier following the AT#-## schema. This allows easy reference to specific nodes in the attack trees throughout the threat model. The first part of the ID (AT#) signifies the attack tree number, while the second part (##) represents the node number within that tree.
Logical Operators
Logical AND/OR operators are used to represent the relationship between parent and child nodes. An AND operator means that all child nodes must be achieved to satisfy the parent node. An OR operator between a parent node and its child nodes means that any of the child nodes can be achieved to satisfy the parent node.
Attack Tree Node Legend
The node legend and the attack tree diagrams (AT0 through AT6) are provided as figures in the online documentation and are not reproduced in this printable view.
14 - TLS Passthrough
This task will walk through the steps required to configure TLS Passthrough via Envoy Gateway. Unlike configuring Secure Gateways, where the Gateway terminates the client TLS connection, TLS Passthrough allows the application itself to terminate the TLS connection, while the Gateway routes the requests to the application based on SNI headers.
Prerequisites
- OpenSSL to generate TLS assets.
Installation
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
TLS Certificates
Generate the certificates and keys used by the Service to terminate client TLS connections. For the application, we’ll deploy a sample echoserver app, with the certificates loaded in the application Pod.
Note: These certificates will not be used by the Gateway, but will remain in the application scope.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
Create a certificate and a private key for passthrough.example.com:
openssl req -out passthrough.example.com.csr -newkey rsa:2048 -nodes -keyout passthrough.example.com.key -subj "/CN=passthrough.example.com/O=some organization"
openssl x509 -req -sha256 -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in passthrough.example.com.csr -out passthrough.example.com.crt
Store the cert/key in a Secret:
kubectl create secret tls server-certs --key=passthrough.example.com.key --cert=passthrough.example.com.crt
Deployment
Deploy TLS Passthrough application Deployment, Service and TLSRoute:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/tls-passthrough.yaml
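For reference, the TLSRoute in that manifest conceptually looks like the following sketch; the hostnames field is what the Gateway matches against the client's TLS SNI, and the resource and backend names shown here are assumptions (see the applied manifest for the actual definitions):
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: passthrough-app            # hypothetical name
spec:
  parentRefs:
  - name: eg                       # the Gateway patched in the next step
  hostnames:
  - passthrough.example.com        # matched against the SNI, not an HTTP Host header
  rules:
  - backendRefs:
    - name: passthrough-echoserver # hypothetical Service that terminates TLS itself
      port: 443                    # hypothetical port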
Patch the Gateway from the Quickstart to include a TLS listener that listens on port 6443 and is configured for TLS mode Passthrough:
kubectl patch gateway eg --type=json --patch '
- op: add
path: /spec/listeners/-
value:
name: tls
protocol: TLS
hostname: passthrough.example.com
port: 6443
tls:
mode: Passthrough
'
Testing
You can test the functionality by sending traffic to the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Curl the example app through the Gateway, e.g. Envoy proxy:
curl -v -HHost:passthrough.example.com --resolve "passthrough.example.com:6443:${GATEWAY_HOST}" \
--cacert example.com.crt https://passthrough.example.com:6443/get
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 6043:6443 &
Curl the example app through Envoy proxy:
curl -v --resolve "passthrough.example.com:6043:127.0.0.1" https://passthrough.example.com:6043 \
--cacert passthrough.example.com.crt
Clean-Up
Follow the steps from the Quickstart to uninstall Envoy Gateway and the example manifest.
Delete the Secret:
kubectl delete secret/server-certs
Next Steps
Check out the Developer Guide to get involved in the project.
15 - TLS Termination for TCP
This task will walk through the steps required to configure TLS Terminate mode for TCP traffic via Envoy Gateway. This task uses a self-signed CA, so it should be used for testing and demonstration purposes only.
Prerequisites
- OpenSSL to generate TLS assets.
Installation
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
TLS Certificates
Generate the certificates and keys used by the Gateway to terminate client TLS connections.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
Create a certificate and a private key for www.example.com:
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
Store the cert/key in a Secret:
kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt
Install the TLS Termination for TCP example resources:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/latest/examples/kubernetes/tls-termination.yaml
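Conceptually, that example wires a Gateway listener in TLS Terminate mode (referencing the example-cert Secret created above) to a TCPRoute that forwards the decrypted stream to the backend. A rough sketch of those two pieces, with resource names, listener name, and backend port as assumptions rather than the actual manifest contents:
# Gateway listener fragment
- name: tls
  protocol: TLS
  port: 443
  tls:
    mode: Terminate
    certificateRefs:
    - kind: Secret
      name: example-cert
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: backend                    # hypothetical name
spec:
  parentRefs:
  - name: eg
    sectionName: tls               # attach to the TLS listener above
  rules:
  - backendRefs:
    - name: backend                # hypothetical Service receiving plaintext TCP
      port: 3000                   # hypothetical port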
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Testing
Get the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Query the example app through the Gateway:
curl -v -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cacert example.com.crt https://www.example.com/get
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &
Query the example app through Envoy proxy:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get
16 - Using cert-manager For TLS Termination
This task shows how to set up cert-manager to automatically create certificates and secrets for use by Envoy Gateway. It will first show how to enable the self-sign issuer, which is useful to test that cert-manager and Envoy Gateway can talk to each other. Then it shows how to use Let’s Encrypt’s staging environment. Changing to the Let’s Encrypt production environment is straight-forward after that.
Prerequisites
- A Kubernetes cluster and a configured kubectl.
- The helm command.
- The curl command or similar for testing HTTPS requests.
- For the ACME HTTP-01 challenge to work:
  - your Gateway must be reachable on the public Internet.
  - the domain name you use (we use www.example.com) must point to the Gateway’s external IP(s).
Installation
Follow the steps from the Quickstart task to install Envoy Gateway and the example manifest. Before proceeding, you should be able to query the example backend using HTTP.
Verify the Gateway status:
kubectl get gateway/eg -o yaml
egctl x status gateway -v
Deploying cert-manager
This is a summary of cert-manager Installation with Helm.
Installing cert-manager is straight-forward, but currently (v1.12) requires setting a feature gate to enable the Gateway API support.
$ helm repo add jetstack https://charts.jetstack.io
$ helm upgrade --install --create-namespace --namespace cert-manager --set installCRDs=true --set featureGates=ExperimentalGatewayAPISupport=true cert-manager jetstack/cert-manager
You should now have cert-manager running with nothing to do:
$ kubectl wait --for=condition=Available deployment -n cert-manager --all
deployment.apps/cert-manager condition met
deployment.apps/cert-manager-cainjector condition met
deployment.apps/cert-manager-webhook condition met
$ kubectl get -n cert-manager deployment
NAME READY UP-TO-DATE AVAILABLE AGE
cert-manager 1/1 1 1 42m
cert-manager-cainjector 1/1 1 1 42m
cert-manager-webhook 1/1 1 1 42m
A Self-Signing Issuer
cert-manager can have any number of issuer configurations. The simplest issuer type is SelfSigned. It simply takes the certificate request and signs it with the private key it generates for the TLS Secret.
Self-signed certificates don't help in establishing a chain of trust, but they are great for initial testing due to their simplicity.
To install the self-signed issuer, run:
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: selfsigned
spec:
selfSigned: {}
EOF
Creating a TLS Gateway Listener
We now have to patch the example Gateway to reference cert-manager:
$ kubectl patch gateway/eg --patch '
metadata:
annotations:
cert-manager.io/cluster-issuer: selfsigned
cert-manager.io/common-name: "Hello World!"
spec:
listeners:
- name: https
protocol: HTTPS
hostname: www.example.com
port: 443
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: eg-https
' --type=merge
You could instead create a new Gateway serving HTTPS, if you’d prefer. cert-manager doesn’t care, but we’ll keep it all together in this task.
Nowadays, X.509 certificates don’t use the subject Common Name for hostname matching, so you can set it to whatever you want, or leave it empty. The important parts here are
- the annotation referencing the “selfsigned” ClusterIssuer we created above,
- the hostname, which is required (but see #6051 for computing it based on attached HTTPRoutes), and
- the named Secret, which is what cert-manager will create for us.
The annotations are documented in Supported Annotations.
Patching the Gateway makes cert-manager create a self-signed certificate within a few seconds.
Eventually, the Gateway becomes Programmed again:
$ kubectl wait --for=condition=Programmed gateway/eg
gateway.gateway.networking.k8s.io/eg condition met
Testing The Gateway
See Testing in Secure Gateways for the general idea.
Since we have a self-signed certificate, curl will by default reject it, requiring the -k flag:
$ curl -kv -HHost:www.example.com https://127.0.0.1/get
...
* Server certificate:
* subject: CN=Hello World!
...
< HTTP/2 200
...
How cert-manager and Envoy Gateway Interact
This explains cert-manager Concepts in an Envoy Gateway context.
In the interaction between the two, cert-manager does all the heavy lifting.
It subscribes to changes to Gateway resources (using the gateway-shim component). For any Gateway it finds, it looks for any TLS listeners, and the associated tls.certificateRefs.
Note that while Gateway API supports multiple refs here, Envoy Gateway only uses one.
cert-manager also looks at the hostname of the listener to figure out which hosts the certificate is expected to cover.
More than one listener can use the same certificate Secret, which means cert-manager needs to find all listeners using the same Secret before deciding what to do.
If the certificateRef points to a valid certificate, given the hostnames found in listeners, cert-manager has nothing to do.
If there is no valid certificate, or it is about to expire, cert-manager’s gateway-shim creates a Certificate resource, or updates the existing one.
cert-manager then follows the Certificate Lifecycle.
To know how to issue the certificate, a ClusterIssuer is configured, and referenced through annotations on the Gateway resource, which you did above.
Once a matching ClusterIssuer is found, that plugin does what needs to be done to acquire a signed certificate.
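Putting the pieces together, the Certificate that the gateway-shim generates for the Gateway we patched earlier would look roughly like this (a sketch reconstructed from the listener and annotations above, not copied from a live cluster):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: eg-https               # named after the referenced Secret
  namespace: default
spec:
  secretName: eg-https         # from tls.certificateRefs on the listener
  commonName: "Hello World!"   # from the cert-manager.io/common-name annotation
  dnsNames:
  - www.example.com            # from the listener hostname
  issuerRef:
    kind: ClusterIssuer
    name: selfsigned           # from the cert-manager.io/cluster-issuer annotation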
In the case of the ACME protocol (used by Let’s Encrypt), cert-manager can also use an HTTP Gateway to solve the HTTP-01 challenge type. This is the other side of cert-manager’s Gateway API support: the ACME issuer creates a temporary HTTPRoute, lets the ACME server(s) query it, and deletes it again.
cert-manager then updates the Secret that the Gateway’s listener points to in tls.certificateRefs.
Envoy Gateway picks up that the Secret has changed, and reloads the corresponding Envoy Proxy Deployments with the new private key and certificate.
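If you want to see this happen, you can inspect the certificate currently stored in the Secret before and after a renewal (a quick check, assuming the default namespace and the eg-https Secret from above):
$ kubectl get secret eg-https -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer -enddate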
As you can imagine, cert-manager requires quite broad permissions to update Secrets in any namespace, so the security-minded reader may want to look at the RBAC resources the Helm chart creates.
Using the ACME Issuer With Let’s Encrypt and HTTP-01
We will start using the Let’s Encrypt staging environment, to spare their production environment. Our Gateway already contains an HTTP listener, so we will use that for the HTTP-01 challenges.
$ CERT_MANAGER_CONTACT_EMAIL=$(git config user.email) # Or whatever...
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: "$CERT_MANAGER_CONTACT_EMAIL"
privateKeySecretRef:
name: letsencrypt-staging-account-key
solvers:
- http01:
gatewayHTTPRoute:
parentRefs:
- kind: Gateway
name: eg
namespace: default
EOF
The important parts are
- using spec.acme with a server URI and contact email address, and
- referencing our plain HTTP gateway so the challenge HTTPRoute is attached to the right place.
Check the account registration process using the Ready condition:
$ kubectl wait --for=condition=Ready clusterissuer/letsencrypt-staging
$ kubectl describe clusterissuer/letsencrypt-staging
...
Status:
Acme:
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/123456789
Conditions:
Message: The ACME account was registered with the ACME server
Reason: ACMEAccountRegistered
Status: True
Type: Ready
...
Now we’re ready to update the Gateway annotation to use this issuer instead:
$ kubectl annotate --overwrite gateway/eg cert-manager.io/cluster-issuer=letsencrypt-staging
The Gateway should be picked up by cert-manager, which will create a new certificate for you, and replace the Secret.
You should see a new CertificateRequest to track:
$ kubectl get certificaterequest
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
eg-https-xxxxx True True selfsigned system:serviceaccount:cert-manager:cert-manager 42m
eg-https-xxxxx True True letsencrypt-staging system:serviceaccount:cert-manager:cert-manager 42m
Testing The Gateway
We still require the -k flag, since the Let’s Encrypt staging environment CA is not widely trusted.
$ curl -kv -HHost:www.example.com https://127.0.0.1/get
...
* Server certificate:
* subject: CN=Hello World!
* issuer: C=US; O=(STAGING) Let's Encrypt; CN=(STAGING) Ersatz Edamame E1
...
< HTTP/2 200
...
Using The Let’s Encrypt Production Environment
Changing to the production environment is just a matter of replacing the server URI:
$ CERT_MANAGER_CONTACT_EMAIL=$(git config user.email) # Or whatever...
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory # Removed "-staging".
email: "$CERT_MANAGER_CONTACT_EMAIL"
privateKeySecretRef:
name: letsencrypt-account-key # Removed "-staging".
solvers:
- http01:
gatewayHTTPRoute:
parentRefs:
- kind: Gateway
name: eg
namespace: default
EOF
And now you can update the Gateway annotation to point to letsencrypt instead:
$ kubectl annotate --overwrite gateway/eg cert-manager.io/cluster-issuer=letsencrypt
As before, track it by looking at CertificateRequests.
Testing The Gateway
Once the certificate has been replaced, we should finally be able to get rid of the -k flag:
$ curl -v -HHost:www.example.com --resolve www.example.com:443:127.0.0.1 https://www.example.com/get
...
* Server certificate:
* subject: CN=Hello World!
* issuer: C=US; O=Let's Encrypt; CN=R3
...
< HTTP/2 200
...
Collecting Garbage
You probably want to set the cert-manager.io/revision-history-limit annotation on your Gateway to make cert-manager prune the CertificateRequest history.
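For example, to keep only the most recent revision (assuming that suits your needs):
$ kubectl annotate --overwrite gateway/eg cert-manager.io/revision-history-limit=1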
cert-manager deletes unused Certificate resources, and they are updated in-place when possible, so there should be no need for cleaning up Certificate resources.
The deletion is based on whether a Gateway still holds a tls.certificateRefs that requires the Certificate. If you remove a TLS listener from a Gateway, you may still have a Secret lingering; cert-manager can clean it up if it is run with the --enable-certificate-owner-ref flag, which makes each Secret owned by its Certificate.
Issuer Namespaces
We have used ClusterIssuer resources in this tutorial. They are not bound to any namespace, and will read annotations from Gateways in any namespace. You could also use Issuer, which is bound to a namespace. This is useful e.g. if you want to use different ACME accounts for different namespaces.
If you change the issuer kind, you also need to change the annotation key from cert-manager.io/cluster-issuer to cert-manager.io/issuer.
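For example, a namespaced equivalent of the self-signed issuer from earlier might look like this sketch (the namespace must match the Gateway's, assumed here to be default):
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: default
spec:
  selfSigned: {}
EOF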
Using ExternalDNS
The ExternalDNS controller maintains DNS records based on Kubernetes resources. Together with cert-manager, this can be used to fully automate hostname management. It can use various source resources, among them Gateway Routes. Just specify a Gateway Route resource, let ExternalDNS create the domain records, and let cert-manager create the TLS certificate.
The tutorial on Gateway API uses kubectl. They also have a Helm chart, which is easier to customize. The only thing relevant to Envoy Gateway is to set the sources:
# values.yaml
sources:
- gateway-httproute
- gateway-grpcroute
- gateway-tcproute
- gateway-tlsroute
- gateway-udproute
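As a sketch, installing the chart with those sources could look like the following; the repository URL and release name here are assumptions, and you will still need provider-specific settings (DNS provider, credentials) from the ExternalDNS documentation:
$ helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
$ helm upgrade --install external-dns external-dns/external-dns -f values.yaml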
Monitoring Progress / Troubleshooting
You can monitor progress in several ways:
The Issuer has a Ready condition (though this is rather boring for the selfSigned type):
$ kubectl get issuer --all-namespaces
NAMESPACE NAME READY AGE
default selfsigned True 42m
The Gateway will say when it has an invalid certificate:
$ kubectl describe gateway/eg
...
Conditions:
Message: Secret default/eg-https does not exist.
Reason: InvalidCertificateRef
Status: False
Type: ResolvedRefs
...
Message: Listener is invalid, see other Conditions for details.
Reason: Invalid
Status: False
Type: Programmed
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BadConfig 42m cert-manager-gateway-shim Skipped a listener block: spec.listeners[1].hostname: Required value: the hostname cannot be empty
The main question is whether cert-manager has picked up on the Gateway, i.e., has it created a Certificate for it? The above describe output contains an event from cert-manager-gateway-shim telling you of one such issue.
Be aware that if you have a non-TLS listener in the Gateway, like we did, there will be events saying it is not eligible, which is of course expected.
Another option is looking at Deployment logs.
The cert-manager logs are not very verbose by default, but setting the Helm value global.logLevel to 6 will enable all debug logs (the default is 2). This will make them verbose enough to say why a Gateway wasn’t considered (e.g. missing hostname, or tls.mode is not Terminate).
$ kubectl logs -n cert-manager deployment/cert-manager
...
Simply listing Certificate resources may be useful, even if it just gives a yes/no answer:
$ kubectl get certificate --all-namespaces
NAMESPACE NAME READY SECRET AGE
default eg-https True eg-https 42m
If there is a Certificate, then the gateway-shim has recognized the Gateway. But is there a CertificateRequest for it?
(BTW, don’t confuse this with a CertificateSigningRequest, which is a Kubernetes core resource type representing the same thing.)
$ kubectl get certificaterequest --all-namespaces
NAMESPACE NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
default eg-https-xxxxx True True selfsigned system:serviceaccount:cert-manager:cert-manager 42m
The ACME issuer also has Order and Challenge resources to watch:
$ kubectl get order --all-namespaces -o wide
NAME STATE ISSUER REASON AGE
order.acme.cert-manager.io/envoy-https-xxxxx-123456789 pending letsencrypt-staging 42m
$ kubectl get challenge --all-namespaces
NAME STATE DOMAIN AGE
challenge.acme.cert-manager.io/envoy-https-xxxxx-123456789-1234567890 pending www.example.com 42m
Using kubectl get -o wide or kubectl describe on the Challenge will explain its state in more detail.
$ kubectl get order --all-namespaces -o wide
NAME STATE ISSUER REASON AGE
order.acme.cert-manager.io/envoy-https-xxxxx-123456789 valid letsencrypt-staging 42m
Finally, since cert-manager creates the Secret referenced by the Gateway listener as its last step, we can also look for that:
$ kubectl get secret eg-https
NAME TYPE DATA AGE
eg-https kubernetes.io/tls 3 42m
Clean Up
- Uninstall cert-manager: helm uninstall --namespace cert-manager cert-manager
- Delete the cert-manager namespace: kubectl delete namespace/cert-manager
- Delete the https listener from gateway/eg.
- Delete secret/eg-https.