Installation
1 - Install with Helm
Helm is a package manager for Kubernetes that automates the release and management of software on Kubernetes.
Envoy Gateway can be installed via a Helm chart with a few simple steps, depending on whether you are deploying for the first time or upgrading an existing Envoy Gateway installation.
Before you begin
Compatibility Matrix
Refer to the Version Compatibility Matrix to learn more.
The Envoy Gateway Helm chart is hosted on DockerHub. It is published at oci://docker.io/envoyproxy/gateway-helm.
Note
We use v0.0.0-latest as the latest development version.
You can visit Envoy Gateway Helm Chart for more releases.
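For example, if you want to try the development build instead of a release, the same chart can be installed with the v0.0.0-latest tag; this is a hedged sketch and is not recommended for production:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace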
Install with Helm
Envoy Gateway is typically deployed to Kubernetes from the command line. If you don't have a Kubernetes cluster, you can use kind to create one.
Developer Guide
Refer to the Developer Guide to learn more.
Install the Gateway API CRDs and Envoy Gateway:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.1.4 -n envoy-gateway-system --create-namespace
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
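If the wait command times out, a quick way to troubleshoot is to check the Deployment and its pods with standard kubectl commands (pod names will differ in your cluster):
kubectl get pods -n envoy-gateway-system
kubectl describe deployment/envoy-gateway -n envoy-gateway-system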
Install the GatewayClass, Gateway, HTTPRoute and example app:
kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/v1.1.4/quickstart.yaml -n default
Note: quickstart.yaml defines that Envoy Gateway will listen for traffic on port 80 on its globally-routable IP address, to make it easy to use browsers to test Envoy Gateway. When Envoy Gateway sees that its Listener is using a privileged port (<1024), it will map this internally to an unprivileged port, so that Envoy Gateway doesn't need additional privileges. It's important to be aware of this mapping, since you may need to take it into consideration when debugging.
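One hedged way to see the mapping is to inspect the Service that Envoy Gateway generates for the Gateway (the generated Service name will differ per cluster):
# List the Services Envoy Gateway created in its namespace
kubectl get service -n envoy-gateway-system
# Inspect the port list: the Service exposes port 80, while targetPort is the remapped unprivileged container port
kubectl get service -n envoy-gateway-system -o yaml | grep -A4 'ports:'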
Helm chart customizations
Below are some quick ways to customize an Envoy Gateway installation with the helm install command.
Helm Chart Values
If you want to know all the available fields inside the values.yaml file, please see the Helm Chart Values.
Increase the replicas
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.1.4 -n envoy-gateway-system --create-namespace --set deployment.replicas=2
Change the kubernetesClusterDomain name
If your cluster was installed with a different domain name, you can set it with the command below.
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.1.4 -n envoy-gateway-system --create-namespace --set kubernetesClusterDomain=<domain name>
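Multiple --set flags can be combined in one command. Here is a hedged sketch that sets both options at once, with example.local as a purely illustrative domain:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.1.4 -n envoy-gateway-system --create-namespace --set deployment.replicas=2 --set kubernetesClusterDomain=example.local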
Note: The options above are handy for quick, direct customizations. For more complex changes, a values.yaml file is the better tool.
Using values.yaml file for complex installation
deployment:
  envoyGateway:
    resources:
      limits:
        cpu: 700m
        memory: 128Mi
      requests:
        cpu: 10m
        memory: 64Mi
  ports:
    - name: grpc
      port: 18005
      targetPort: 18000
    - name: ratelimit
      port: 18006
      targetPort: 18001

config:
  envoyGateway:
    logging:
      level:
        default: debug
Here we have made three changes in our values.yaml file: increased the CPU resource limit to 700m, changed the port for grpc to 18005 and for ratelimit to 18006, and set the logging level to debug.
You can use the command below to install Envoy Gateway using the values.yaml file.
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.1.4 -n envoy-gateway-system --create-namespace -f values.yaml
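You can then confirm which overrides Helm applied to the release with standard Helm commands:
helm get values eg -n envoy-gateway-system
helm status eg -n envoy-gateway-system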
Open Ports
These are the ports used by Envoy Gateway and the managed Envoy Proxy.
Envoy Gateway
Envoy Gateway | Address | Port | Configurable |
---|---|---|---|
Xds EnvoyProxy Server | 0.0.0.0 | 18000 | No |
Xds RateLimit Server | 0.0.0.0 | 18001 | No |
Admin Server | 127.0.0.1 | 19000 | Yes |
Metrics Server | 0.0.0.0 | 19001 | No |
Health Check | 127.0.0.1 | 8081 | No |
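Since the Admin Server is the only configurable entry above, here is a minimal, hedged sketch of how it might be adjusted through the chart's config.envoyGateway section, assuming the EnvoyGateway admin address fields; verify against the EnvoyGateway API reference before relying on it:
# values.yaml (sketch; the port value is illustrative only)
config:
  envoyGateway:
    admin:
      address:
        host: 127.0.0.1
        port: 19010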
EnvoyProxy
Envoy Proxy | Address | Port |
---|---|---|
Admin Server | 127.0.0.1 | 19000 |
Health Check | 0.0.0.0 | 19001 |
Next Steps
Envoy Gateway should now be successfully installed and running. To explore more of what Envoy Gateway can do, refer to Tasks.
2 - Install with Kubernetes YAML
This task walks you through installing Envoy Gateway in your Kubernetes cluster.
The manual install process offers less control over configuration than the Helm install method; if you need more control over your Envoy Gateway installation, it is recommended that you use Helm.
Before you begin
Envoy Gateway is designed to run in Kubernetes for production. The most essential requirements are:
- Kubernetes 1.27 or later
- The
kubectl
command-line tool
Compatibility Matrix
Refer to the Version Compatibility Matrix to learn more.
Install with YAML
Envoy Gateway is typically deployed to Kubernetes from the command line. If you don't have a Kubernetes cluster, you can use kind to create one.
Developer Guide
Refer to the Developer Guide to learn more.
In your terminal, run the following command:
kubectl apply --server-side -f https://github.com/envoyproxy/gateway/releases/download/v1.1.4/install.yaml
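As with the Helm install, you can wait for the Deployment to become available before moving on:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available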
Next Steps
Envoy Gateway should now be successfully installed and running. To explore more of what Envoy Gateway can do, refer to Tasks.
Upgrading from v1.0
Due to breaking changes in Gateway API v1.1, some manual migration steps are required to upgrade Envoy Gateway to v1.1.
- Delete the BackendTLSPolicy CRD (and its resources):
kubectl delete crd backendtlspolicies.gateway.networking.k8s.io
- Update Gateway-API and Envoy Gateway CRDs:
helm pull oci://docker.io/envoyproxy/gateway-helm --version v1.1.4 --untar
kubectl apply --force-conflicts --server-side -f ./gateway-helm/crds/gatewayapi-crds.yaml
kubectl apply --force-conflicts --server-side -f ./gateway-helm/crds/generated
- Update your BackendTLSPolicy and GRPCRoute resources according to the Gateway-API v1.1 Upgrade Notes.
- Update your Envoy Gateway xPolicy resources: remove the namespace section from targetRef, as sketched below.
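For illustration only, here is a hedged sketch of that targetRef change on a hypothetical policy attached to the quickstart Gateway; the field layout is assumed from Gateway API policy attachment, so adjust it to your actual resources:
# Before (v1.0-style targetRef carrying a namespace)
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
    namespace: default
# After (v1.1: drop namespace; keep the policy in the same namespace as its target)
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg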
Install Envoy Gateway v1.1.4:
helm upgrade eg oci://docker.io/envoyproxy/gateway-helm --version v1.1.4 -n envoy-gateway-system
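Afterwards you can confirm the deployed release and chart version with a standard Helm command:
helm list -n envoy-gateway-system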
3 - Install egctl
What is egctl?
egctl is a command line tool that provides additional functionality for Envoy Gateway users.
This task shows how to install the egctl CLI. egctl can be installed either from source, or from pre-built binary releases.
From The Envoy Gateway Project
The Envoy Gateway project provides two ways to fetch and install egctl. These are the official methods to get egctl releases, and both are described below.
Every release of egctl provides binary releases for a variety of OSes. These binary versions can be manually downloaded and installed.
- Download your desired version
- Unpack it (tar -zxvf egctl_latest_linux_amd64.tar.gz)
- Find the egctl binary in the unpacked directory, and move it to its desired destination (mv bin/linux/amd64/egctl /usr/local/bin/egctl)
From there, you should be able to run: egctl help.
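To confirm the binary is on your PATH and see which release you installed, egctl also ships a version subcommand (the exact output format may vary by release):
egctl version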
egctl now has an installer script that will automatically grab the latest release version of egctl and install it locally.
You can fetch that script, and then execute it locally. It’s well documented so that you can read through it and understand what it is doing before you run it.
curl -fsSL -o get-egctl.sh https://gateway.envoyproxy.io/get-egctl.sh
chmod +x get-egctl.sh
# get help info of the installer script
bash get-egctl.sh --help
# install the latest development version of egctl
VERSION=latest bash get-egctl.sh
Yes, you can just use the below command if you want to live on the edge.
curl -fsSL https://gateway.envoyproxy.io/get-egctl.sh | VERSION=latest bash
You can also install egctl using homebrew:
brew install egctl
Next Steps
You can refer to the Use egctl task for more details about egctl.
4 - Control Plane Authentication using custom certs
Envoy Gateway establishes a secure TLS connection for control plane communication between the Envoy Gateway pods and the Envoy Proxy fleet. The TLS certificates used here are self-signed and generated by a job that runs before Envoy Gateway is created, and these certs are mounted onto the Envoy Gateway and Envoy Proxy pods.
This task will walk you through configuring custom certs for control plane auth.
Before you begin
We use Cert-Manager to manage the certificates. You can install it by following the official guide.
Configure custom certs for control plane
First you need to set up the CA issuer; in this task, we use the selfsigned-issuer as an example.
You should not use the self-signed issuer in production; use a real CA issuer instead.
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  labels:
    app.kubernetes.io/name: envoy-gateway
  name: selfsigned-issuer
  namespace: envoy-gateway-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: envoy-gateway-ca
  namespace: envoy-gateway-system
spec:
  isCA: true
  commonName: envoy-gateway
  secretName: envoy-gateway-ca
  privateKey:
    algorithm: RSA
    size: 2048
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  labels:
    app.kubernetes.io/name: envoy-gateway
  name: eg-issuer
  namespace: envoy-gateway-system
spec:
  ca:
    secretName: envoy-gateway-ca
EOF
Create a cert for the envoy gateway controller; the cert will be stored in secret envoy-gateway.
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  labels:
    app.kubernetes.io/name: envoy-gateway
  name: envoy-gateway
  namespace: envoy-gateway-system
spec:
  commonName: envoy-gateway
  dnsNames:
  - "envoy-gateway"
  - "envoy-gateway.envoy-gateway-system"
  - "envoy-gateway.envoy-gateway-system.svc"
  - "envoy-gateway.envoy-gateway-system.svc.cluster.local"
  issuerRef:
    kind: Issuer
    name: eg-issuer
  usages:
  - "digital signature"
  - "data encipherment"
  - "key encipherment"
  - "content commitment"
  secretName: envoy-gateway
EOF
Create a cert for envoy proxy; the cert will be stored in secret envoy.
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  labels:
    app.kubernetes.io/name: envoy-gateway
  name: envoy
  namespace: envoy-gateway-system
spec:
  commonName: "*"
  dnsNames:
  - "*.envoy-gateway-system"
  issuerRef:
    kind: Issuer
    name: eg-issuer
  usages:
  - "digital signature"
  - "data encipherment"
  - "key encipherment"
  - "content commitment"
  secretName: envoy
EOF
Create a cert for rate limit; the cert will be stored in secret envoy-rate-limit.
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  labels:
    app.kubernetes.io/name: envoy-gateway
  name: envoy-rate-limit
  namespace: envoy-gateway-system
spec:
  commonName: "*"
  dnsNames:
  - "*.envoy-gateway-system"
  issuerRef:
    kind: Issuer
    name: eg-issuer
  usages:
  - "digital signature"
  - "data encipherment"
  - "key encipherment"
  - "content commitment"
  secretName: envoy-rate-limit
EOF
Now you can follow the Helm chart installation guide to install Envoy Gateway with the custom certs.
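For example, here is a hedged sketch of a standard install once the secrets above exist; per the certgen note in the chart values below, certgen will not overwrite an existing custom certificate:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.1.4 -n envoy-gateway-system --create-namespace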
5 - Gateway Addons Helm Chart
An Add-ons Helm chart for Envoy Gateway
Homepage: https://gateway.envoyproxy.io/
Maintainers
Name | Url | |
---|---|---|
envoy-gateway-steering-committee | https://github.com/envoyproxy/gateway/blob/main/GOVERNANCE.md | |
envoy-gateway-maintainers | https://github.com/envoyproxy/gateway/blob/main/CODEOWNERS |
Source Code
Requirements
Repository | Name | Version |
---|---|---|
https://fluent.github.io/helm-charts | fluent-bit | 0.30.4 |
https://grafana.github.io/helm-charts | grafana | 8.0.0 |
https://grafana.github.io/helm-charts | loki | 4.8.0 |
https://grafana.github.io/helm-charts | tempo | 1.3.1 |
https://open-telemetry.github.io/opentelemetry-helm-charts | opentelemetry-collector | 0.73.1 |
https://prometheus-community.github.io/helm-charts | prometheus | 25.21.0 |
Values
Key | Type | Default | Description |
---|---|---|---|
fluent-bit.config.filters | string | "[FILTER]\n Name kubernetes\n Match kube.*\n Merge_Log On\n Keep_Log Off\n K8S-Logging.Parser On\n K8S-Logging.Exclude On\n\n[FILTER]\n Name grep\n Match kube.*\n Regex $kubernetes['container_name'] ^envoy$\n\n[FILTER]\n Name parser\n Match kube.*\n Key_Name log\n Parser envoy\n Reserve_Data True\n" | |
fluent-bit.config.inputs | string | "[INPUT]\n Name tail\n Path /var/log/containers/*.log\n multiline.parser docker, cri\n Tag kube.*\n Mem_Buf_Limit 5MB\n Skip_Long_Lines On\n" | |
fluent-bit.config.outputs | string | "[OUTPUT]\n Name loki\n Match kube.*\n Host loki.monitoring.svc.cluster.local\n Port 3100\n Labels job=fluentbit, app=$kubernetes['labels']['app'], k8s_namespace_name=$kubernetes['namespace_name'], k8s_pod_name=$kubernetes['pod_name'], k8s_container_name=$kubernetes['container_name']\n" | |
fluent-bit.config.service | string | "[SERVICE]\n Daemon Off\n Flush {{ .Values.flush }}\n Log_Level {{ .Values.logLevel }}\n Parsers_File parsers.conf\n Parsers_File custom_parsers.conf\n HTTP_Server On\n HTTP_Listen 0.0.0.0\n HTTP_Port {{ .Values.metricsPort }}\n Health_Check On\n" | |
fluent-bit.enabled | bool | true | |
fluent-bit.fullnameOverride | string | "fluent-bit" | |
fluent-bit.image.repository | string | "fluent/fluent-bit" | |
fluent-bit.podAnnotations.“fluentbit.io/exclude” | string | "true" | |
fluent-bit.podAnnotations.“prometheus.io/path” | string | "/api/v1/metrics/prometheus" | |
fluent-bit.podAnnotations.“prometheus.io/port” | string | "2020" | |
fluent-bit.podAnnotations.“prometheus.io/scrape” | string | "true" | |
fluent-bit.testFramework.enabled | bool | false | |
grafana.adminPassword | string | "admin" | |
grafana.dashboardProviders.“dashboardproviders.yaml”.apiVersion | int | 1 | |
grafana.dashboardProviders.“dashboardproviders.yaml”.providers[0].disableDeletion | bool | false | |
grafana.dashboardProviders.“dashboardproviders.yaml”.providers[0].editable | bool | true | |
grafana.dashboardProviders.“dashboardproviders.yaml”.providers[0].folder | string | "envoy-gateway" | |
grafana.dashboardProviders.“dashboardproviders.yaml”.providers[0].name | string | "envoy-gateway" | |
grafana.dashboardProviders.“dashboardproviders.yaml”.providers[0].options.path | string | "/var/lib/grafana/dashboards/envoy-gateway" | |
grafana.dashboardProviders.“dashboardproviders.yaml”.providers[0].orgId | int | 1 | |
grafana.dashboardProviders.“dashboardproviders.yaml”.providers[0].type | string | "file" | |
grafana.dashboardsConfigMaps.envoy-gateway | string | "grafana-dashboards" | |
grafana.datasources.“datasources.yaml”.apiVersion | int | 1 | |
grafana.datasources.“datasources.yaml”.datasources[0].name | string | "Prometheus" | |
grafana.datasources.“datasources.yaml”.datasources[0].type | string | "prometheus" | |
grafana.datasources.“datasources.yaml”.datasources[0].url | string | "http://prometheus" | |
grafana.enabled | bool | true | |
grafana.fullnameOverride | string | "grafana" | |
grafana.service.type | string | "LoadBalancer" | |
loki.backend.replicas | int | 0 | |
loki.deploymentMode | string | "SingleBinary" | |
loki.enabled | bool | true | |
loki.fullnameOverride | string | "loki" | |
loki.gateway.enabled | bool | false | |
loki.loki.auth_enabled | bool | false | |
loki.loki.commonConfig.replication_factor | int | 1 | |
loki.loki.compactorAddress | string | "loki" | |
loki.loki.memberlist | string | "loki-memberlist" | |
loki.loki.rulerConfig.storage.type | string | "local" | |
loki.loki.storage.type | string | "filesystem" | |
loki.monitoring.lokiCanary.enabled | bool | false | |
loki.monitoring.selfMonitoring.enabled | bool | false | |
loki.monitoring.selfMonitoring.grafanaAgent.installOperator | bool | false | |
loki.read.replicas | int | 0 | |
loki.singleBinary.replicas | int | 1 | |
loki.test.enabled | bool | false | |
loki.write.replicas | int | 0 | |
opentelemetry-collector.config.exporters.logging.verbosity | string | "detailed" | |
opentelemetry-collector.config.exporters.loki.endpoint | string | "http://loki.monitoring.svc:3100/loki/api/v1/push" | |
opentelemetry-collector.config.exporters.otlp.endpoint | string | "tempo.monitoring.svc:4317" | |
opentelemetry-collector.config.exporters.otlp.tls.insecure | bool | true | |
opentelemetry-collector.config.exporters.prometheus.endpoint | string | "0.0.0.0:19001" | |
opentelemetry-collector.config.extensions.health_check | object | {} | |
opentelemetry-collector.config.processors.attributes.actions[0].action | string | "insert" | |
opentelemetry-collector.config.processors.attributes.actions[0].key | string | "loki.attribute.labels" | |
opentelemetry-collector.config.processors.attributes.actions[0].value | string | "k8s.pod.name, k8s.namespace.name" | |
opentelemetry-collector.config.receivers.otlp.protocols.grpc.endpoint | string | "${env:MY_POD_IP}:4317" | |
opentelemetry-collector.config.receivers.otlp.protocols.http.endpoint | string | "${env:MY_POD_IP}:4318" | |
opentelemetry-collector.config.receivers.zipkin.endpoint | string | "${env:MY_POD_IP}:9411" | |
opentelemetry-collector.config.service.extensions[0] | string | "health_check" | |
opentelemetry-collector.config.service.pipelines.logs.exporters[0] | string | "loki" | |
opentelemetry-collector.config.service.pipelines.logs.processors[0] | string | "attributes" | |
opentelemetry-collector.config.service.pipelines.logs.receivers[0] | string | "otlp" | |
opentelemetry-collector.config.service.pipelines.metrics.exporters[0] | string | "prometheus" | |
opentelemetry-collector.config.service.pipelines.metrics.receivers[0] | string | "otlp" | |
opentelemetry-collector.config.service.pipelines.traces.exporters[0] | string | "otlp" | |
opentelemetry-collector.config.service.pipelines.traces.receivers[0] | string | "otlp" | |
opentelemetry-collector.config.service.pipelines.traces.receivers[1] | string | "zipkin" | |
opentelemetry-collector.enabled | bool | false | |
opentelemetry-collector.fullnameOverride | string | "otel-collector" | |
opentelemetry-collector.mode | string | "deployment" | |
prometheus.alertmanager.enabled | bool | false | |
prometheus.enabled | bool | true | |
prometheus.kube-state-metrics.enabled | bool | false | |
prometheus.prometheus-node-exporter.enabled | bool | false | |
prometheus.prometheus-pushgateway.enabled | bool | false | |
prometheus.server.fullnameOverride | string | "prometheus" | |
prometheus.server.global.scrape_interval | string | "15s" | |
prometheus.server.image.repository | string | "prom/prometheus" | |
prometheus.server.persistentVolume.enabled | bool | false | |
prometheus.server.readinessProbeInitialDelay | int | 0 | |
prometheus.server.securityContext | object | {} | |
prometheus.server.service.type | string | "LoadBalancer" | |
tempo.enabled | bool | true | |
tempo.fullnameOverride | string | "tempo" | |
tempo.service.type | string | "LoadBalancer" |
6 - Gateway Helm Chart
The Helm chart for Envoy Gateway
Homepage: https://gateway.envoyproxy.io/
Maintainers
Name | Url | |
---|---|---|
envoy-gateway-steering-committee | https://github.com/envoyproxy/gateway/blob/main/GOVERNANCE.md | |
envoy-gateway-maintainers | https://github.com/envoyproxy/gateway/blob/main/CODEOWNERS |
Source Code
Values
Key | Type | Default | Description |
---|---|---|---|
certgen | object | {"job":{"annotations":{},"resources":{},"ttlSecondsAfterFinished":30},"rbac":{"annotations":{},"labels":{}}} | Certgen is used to generate the certificates required by EnvoyGateway. If you want to construct a custom certificate, you can generate a custom certificate through Cert-Manager before installing EnvoyGateway. Certgen will not overwrite the custom certificate. Please do not manually modify values.yaml to disable certgen, it may cause EnvoyGateway OIDC,OAuth2,etc. to not work as expected. |
config.envoyGateway.gateway.controllerName | string | "gateway.envoyproxy.io/gatewayclass-controller" | |
config.envoyGateway.logging.level.default | string | "info" | |
config.envoyGateway.provider.type | string | "Kubernetes" | |
createNamespace | bool | false | |
deployment.envoyGateway.image.repository | string | "" | |
deployment.envoyGateway.image.tag | string | "" | |
deployment.envoyGateway.imagePullPolicy | string | "" | |
deployment.envoyGateway.imagePullSecrets | list | [] | |
deployment.envoyGateway.resources.limits.cpu | string | "500m" | |
deployment.envoyGateway.resources.limits.memory | string | "1024Mi" | |
deployment.envoyGateway.resources.requests.cpu | string | "100m" | |
deployment.envoyGateway.resources.requests.memory | string | "256Mi" | |
deployment.pod.affinity | object | {} | |
deployment.pod.annotations.“prometheus.io/port” | string | "19001" | |
deployment.pod.annotations.“prometheus.io/scrape” | string | "true" | |
deployment.pod.labels | object | {} | |
deployment.pod.tolerations | list | [] | |
deployment.pod.topologySpreadConstraints | list | [] | |
deployment.ports[0].name | string | "grpc" | |
deployment.ports[0].port | int | 18000 | |
deployment.ports[0].targetPort | int | 18000 | |
deployment.ports[1].name | string | "ratelimit" | |
deployment.ports[1].port | int | 18001 | |
deployment.ports[1].targetPort | int | 18001 | |
deployment.ports[2].name | string | "wasm" | |
deployment.ports[2].port | int | 18002 | |
deployment.ports[2].targetPort | int | 18002 | |
deployment.ports[3].name | string | "metrics" | |
deployment.ports[3].port | int | 19001 | |
deployment.ports[3].targetPort | int | 19001 | |
deployment.replicas | int | 1 | |
global.images.envoyGateway.image | string | nil | |
global.images.envoyGateway.pullPolicy | string | nil | |
global.images.envoyGateway.pullSecrets | list | [] | |
global.images.ratelimit.image | string | "docker.io/envoyproxy/ratelimit:master" | |
global.images.ratelimit.pullPolicy | string | "IfNotPresent" | |
global.images.ratelimit.pullSecrets | list | [] | |
kubernetesClusterDomain | string | "cluster.local" | |
podDisruptionBudget.minAvailable | int | 0 |