Installation

This section includes installation related contents of Envoy Gateway.

1 - Install with Helm

Helm is a package manager for Kubernetes that automates the release and management of software on Kubernetes.

Envoy Gateway can be installed via a Helm chart in a few simple steps, depending on whether you are deploying for the first time or upgrading from an existing Envoy Gateway installation.

Before you begin

The Envoy Gateway Helm chart is hosted on Docker Hub.

It is published at oci://docker.io/envoyproxy/gateway-helm.

Install with Helm

Envoy Gateway is typically deployed to Kubernetes from the command line. If you don't have a Kubernetes cluster, you can use kind to create one, as shown below.
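
A minimal sketch of creating a local cluster with kind (assuming kind is already installed; the cluster name is arbitrary):

# create a throwaway local Kubernetes cluster for testing
kind create cluster --name envoy-gateway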

Install the Gateway API CRDs and Envoy Gateway:

helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.2.3 -n envoy-gateway-system --create-namespace

Wait for Envoy Gateway to become available:

kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available

Install the GatewayClass, Gateway, HTTPRoute and example app:

kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/v1.2.3/quickstart.yaml -n default

Note: quickstart.yaml defines that Envoy Gateway will listen for traffic on port 80 on its globally-routable IP address, to make it easy to use browsers to test Envoy Gateway. When Envoy Gateway sees that its Listener is using a privileged port (<1024), it will map this internally to an unprivileged port, so that Envoy Gateway doesn’t need additional privileges. It’s important to be aware of this mapping, since you may need to take it into consideration when debugging.
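
To sanity-check the quickstart, you can look up the address assigned to the Gateway and send a request through it. This is a sketch that assumes the quickstart's Gateway is named eg in the default namespace and that its HTTPRoute matches the host www.example.com:

# fetch the external address assigned to the Gateway
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

# send a request through Envoy Gateway
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/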

Upgrading from a previous version

Helm does not update CRDs that live in the /crds folder of the Helm chart, so you will need to update the CRDs manually. Follow the steps outlined in this section if you're upgrading from a previous version.
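
For example, a minimal sketch of the manual CRD update (the same commands appear in the upgrade steps of the YAML install section below):

helm pull oci://docker.io/envoyproxy/gateway-helm --version v1.2.3 --untar
kubectl apply --force-conflicts --server-side -f ./gateway-helm/crds/gatewayapi-crds.yaml
kubectl apply --force-conflicts --server-side -f ./gateway-helm/crds/generated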

Helm chart customizations

Below are some common ways to customize the helm install command when installing Envoy Gateway.

Increase the replicas

helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.2.3 -n envoy-gateway-system --create-namespace --set deployment.replicas=2

Change the kubernetesClusterDomain name

If your cluster was installed with a different domain name, you can set it with the command below.

helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.2.3 -n envoy-gateway-system --create-namespace --set kubernetesClusterDomain=<domain name>

Note: The flags above cover simple, direct customizations. For more complex changes, use a values.yaml file.

Using values.yaml file for complex installation

deployment:
  envoyGateway:
    resources:
      limits:
        cpu: 700m
        memory: 128Mi
      requests:
        cpu: 10m
        memory: 64Mi
  ports:
    - name: grpc
      port: 18005
      targetPort: 18000
    - name: ratelimit
      port: 18006
      targetPort: 18001

config:
  envoyGateway:
    logging:
      level:
        default: debug

This values.yaml file makes three changes: it raises the CPU resource limit to 700m, changes the grpc port to 18005 and the ratelimit port to 18006, and sets the logging level to debug.

Use the command below to install Envoy Gateway with the values.yaml file.

helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.2.3 -n envoy-gateway-system --create-namespace -f values.yaml
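
To confirm which values the release ended up with, you can inspect it with Helm:

# show the user-supplied values for the eg release
helm get values eg -n envoy-gateway-system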

Open Ports

These are the ports used by Envoy Gateway and the managed Envoy Proxy.

Envoy Gateway

| Envoy Gateway | Address | Port | Configurable |
|---|---|---|---|
| Xds EnvoyProxy Server | 0.0.0.0 | 18000 | No |
| Xds RateLimit Server | 0.0.0.0 | 18001 | No |
| Admin Server | 127.0.0.1 | 19000 | Yes |
| Metrics Server | 0.0.0.0 | 19001 | No |
| Health Check | 127.0.0.1 | 8081 | No |

EnvoyProxy

| Envoy Proxy | Address | Port |
|---|---|---|
| Admin Server | 127.0.0.1 | 19000 |
| Health Check | 0.0.0.0 | 19001 |
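
Since the Envoy Gateway admin server binds to 127.0.0.1 inside the pod, a port-forward is one way to reach it locally. A sketch, assuming the deployment name used earlier:

kubectl port-forward -n envoy-gateway-system deployment/envoy-gateway 19000:19000
# then open http://localhost:19000 from another terminal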

2 - Install with Kubernetes YAML

This task walks you through installing Envoy Gateway in your Kubernetes cluster.

The manual install process offers less control over configuration than the Helm install method, so if you need more control over your Envoy Gateway installation, use Helm instead.

Before you begin

Envoy Gateway is designed to run in Kubernetes for production. The most essential requirements are:

  • Kubernetes 1.28 or later
  • The kubectl command-line tool

Install with YAML

Envoy Gateway is typically deployed to Kubernetes from the command line. If you don't have a Kubernetes cluster, you can use kind to create one.

  1. In your terminal, run the following command:

    kubectl apply --server-side -f https://github.com/envoyproxy/gateway/releases/download/v1.2.3/install.yaml
    
  2. Next Steps

    Envoy Gateway should now be successfully installed and running. You can confirm the deployment is ready with the check below; to explore more of its capabilities, refer to Tasks.
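
As with the Helm install, you can wait for Envoy Gateway to become available:

kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available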

Upgrading from v1.1

Some manual migration steps are required to upgrade Envoy Gateway to v1.2.

  1. Update your GRPCRoute and ReferenceGrant resources if the storage version being used is v1alpha2. Follow the steps in Gateway-API v1.2 Upgrade Notes

  2. Update Gateway-API and Envoy Gateway CRDs:

helm pull oci://docker.io/envoyproxy/gateway-helm --version v1.2.3 --untar
kubectl apply --force-conflicts --server-side -f ./gateway-helm/crds/gatewayapi-crds.yaml
kubectl apply --force-conflicts --server-side -f ./gateway-helm/crds/generated

  3. Install Envoy Gateway v1.2.3:

helm upgrade eg oci://docker.io/envoyproxy/gateway-helm --version v1.2.3 -n envoy-gateway-system
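
You can then confirm that the release was upgraded (assuming the eg release name used above):

helm list -n envoy-gateway-system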

3 - Install egctl

This task shows how to install the egctl CLI. egctl can be installed either from source, or from pre-built binary releases.

From The Envoy Gateway Project

The Envoy Gateway project provides two ways to fetch and install egctl. These are the official methods for getting egctl releases, and both are described below.

Every release of egctl provides binary releases for a variety of OSes. These binary versions can be manually downloaded and installed.

  1. Download your desired version
  2. Unpack it (tar -zxvf egctl_latest_linux_amd64.tar.gz)
  3. Find the egctl binary in the unpacked directory, and move it to its desired destination (mv bin/linux/amd64/egctl /usr/local/bin/egctl)

From there, you should be able to run: egctl help.
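
To double-check the install, you can also print the client version (a quick smoke test; this assumes the version subcommand present in recent egctl releases):

egctl version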

egctl now has an installer script that will automatically grab the latest release version of egctl and install it locally.

You can fetch that script, and then execute it locally. It’s well documented so that you can read through it and understand what it is doing before you run it.

curl -fsSL -o get-egctl.sh https://gateway.envoyproxy.io/get-egctl.sh

chmod +x get-egctl.sh

# get help info of the get-egctl.sh script
bash get-egctl.sh --help

# install the latest development version of egctl
VERSION=latest bash get-egctl.sh

Yes, you can just use the below command if you want to live on the edge.

curl -fsSL https://gateway.envoyproxy.io/get-egctl.sh | VERSION=latest bash 

You can also install egctl using homebrew:

brew install egctl

4 - Control Plane Authentication using custom certs

Envoy Gateway establishes a secure TLS connection for control plane communication between the Envoy Gateway pods and the Envoy Proxy fleet. The TLS certificates used here are self-signed and generated by a job that runs before Envoy Gateway is created; these certs are then mounted onto the Envoy Gateway and Envoy Proxy pods.

This task will walk you through configuring custom certs for control plane auth.

Before you begin

We use Cert-Manager to manage the certificates. You can install it by following the official guide.

Configure custom certs for control plane

  1. First, set up the CA issuer. In this task, we use the selfsigned-issuer as an example.

    You should not use the self-signed issuer in production; use a real CA issuer instead.

    cat <<EOF | kubectl apply -f -
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      labels:
        app.kubernetes.io/name: envoy-gateway
      name: selfsigned-issuer
      namespace: envoy-gateway-system
    spec:
      selfSigned: {}
    ---
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      name: envoy-gateway-ca
      namespace: envoy-gateway-system
    spec:
      isCA: true
      commonName: envoy-gateway
      secretName: envoy-gateway-ca
      privateKey:
        algorithm: RSA
        size: 2048
      issuerRef:
        name: selfsigned-issuer
        kind: Issuer
        group: cert-manager.io
    ---
    apiVersion: cert-manager.io/v1
    kind: Issuer
    metadata:
      labels:
        app.kubernetes.io/name: envoy-gateway
      name: eg-issuer
      namespace: envoy-gateway-system
    spec:
      ca:
        secretName: envoy-gateway-ca
    EOF
    
  2. Create a cert for the Envoy Gateway controller; the cert will be stored in the secret envoy-gateway.

    cat<<EOF | kubectl apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      labels:
        app.kubernetes.io/name: envoy-gateway
      name: envoy-gateway
      namespace: envoy-gateway-system
    spec:
      commonName: envoy-gateway
      dnsNames:
      - "envoy-gateway"
      - "envoy-gateway.envoy-gateway-system"
      - "envoy-gateway.envoy-gateway-system.svc"
      - "envoy-gateway.envoy-gateway-system.svc.cluster.local"
      issuerRef:
        kind: Issuer
        name: eg-issuer
      usages:
      - "digital signature"
      - "data encipherment"
      - "key encipherment"
      - "content commitment"
      secretName: envoy-gateway
    EOF
    
  3. Create a cert for Envoy Proxy; the cert will be stored in the secret envoy.

    cat<<EOF | kubectl apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      labels:
        app.kubernetes.io/name: envoy-gateway
      name: envoy
      namespace: envoy-gateway-system
    spec:
      commonName: "*"
      dnsNames:
      - "*.envoy-gateway-system"
      issuerRef:
        kind: Issuer
        name: eg-issuer
      usages:
      - "digital signature"
      - "data encipherment"
      - "key encipherment"
      - "content commitment"
      secretName: envoy
    EOF
    
  4. Create a cert for rate limit; the cert will be stored in the secret envoy-rate-limit.

    cat<<EOF | kubectl apply -f -
    apiVersion: cert-manager.io/v1
    kind: Certificate
    metadata:
      labels:
        app.kubernetes.io/name: envoy-gateway
      name: envoy-rate-limit
      namespace: envoy-gateway-system
    spec:
      commonName: "*"
      dnsNames:
      - "*.envoy-gateway-system"
      issuerRef:
        kind: Issuer
        name: eg-issuer
      usages:
      - "digital signature"
      - "data encipherment"
      - "key encipherment"
      - "content commitment"
      secretName: envoy-rate-limit
    EOF
    
  5. Now you can follow the Helm chart installation guide to install Envoy Gateway with the custom certs. Before doing so, you can verify that the certs were issued, as shown below.
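
A quick check that the three secrets created in the steps above exist:

kubectl get secrets -n envoy-gateway-system envoy-gateway envoy envoy-rate-limit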

5 - Gateway Addons Helm Chart

Version: v0.0.0-latest Type: application AppVersion: latest

An Add-ons Helm chart for Envoy Gateway

Homepage: https://gateway.envoyproxy.io/

Maintainers

| Name | Email | Url |
|---|---|---|
| envoy-gateway-steering-committee | | https://github.com/envoyproxy/gateway/blob/main/GOVERNANCE.md |
| envoy-gateway-maintainers | | https://github.com/envoyproxy/gateway/blob/main/CODEOWNERS |

Requirements

| Repository | Name | Version |
|---|---|---|
| https://fluent.github.io/helm-charts | fluent-bit | 0.30.4 |
| https://grafana.github.io/helm-charts | alloy | 0.9.2 |
| https://grafana.github.io/helm-charts | grafana | 8.0.0 |
| https://grafana.github.io/helm-charts | loki | 4.8.0 |
| https://grafana.github.io/helm-charts | tempo | 1.3.1 |
| https://open-telemetry.github.io/opentelemetry-helm-charts | opentelemetry-collector | 0.108.0 |
| https://prometheus-community.github.io/helm-charts | prometheus | 25.21.0 |

Values

| Key | Type | Default | Description |
|---|---|---|---|
| alloy.alloy.configMap.content | string | "// Write your Alloy config here:\nlogging {\n level = \"info\"\n format = \"logfmt\"\n}\nloki.write \"alloy\" {\n endpoint {\n url = \"http://loki.monitoring.svc:3100/loki/api/v1/push\"\n }\n}\n// discovery.kubernetes allows you to find scrape targets from Kubernetes resources.\n// It watches cluster state and ensures targets are continually synced with what is currently running in your cluster.\ndiscovery.kubernetes \"pod\" {\n role = \"pod\"\n}\n\n// discovery.relabel rewrites the label set of the input targets by applying one or more relabeling rules.\n// If no rules are defined, then the input targets are exported as-is.\ndiscovery.relabel \"pod_logs\" {\n targets = discovery.kubernetes.pod.targets\n\n // Label creation - \"namespace\" field from \"__meta_kubernetes_namespace\"\n rule {\n source_labels = [\"__meta_kubernetes_namespace\"]\n action = \"replace\"\n target_label = \"namespace\"\n }\n\n // Label creation - \"pod\" field from \"__meta_kubernetes_pod_name\"\n rule {\n source_labels = [\"__meta_kubernetes_pod_name\"]\n action = \"replace\"\n target_label = \"pod\"\n }\n\n // Label creation - \"container\" field from \"__meta_kubernetes_pod_container_name\"\n rule {\n source_labels = [\"__meta_kubernetes_pod_container_name\"]\n action = \"replace\"\n target_label = \"container\"\n }\n\n // Label creation - \"app\" field from \"__meta_kubernetes_pod_label_app_kubernetes_io_name\"\n rule {\n source_labels = [\"__meta_kubernetes_pod_label_app_kubernetes_io_name\"]\n action = \"replace\"\n target_label = \"app\"\n }\n\n // Label creation - \"job\" field from \"__meta_kubernetes_namespace\" and \"__meta_kubernetes_pod_container_name\"\n // Concatenate values __meta_kubernetes_namespace/__meta_kubernetes_pod_container_name\n rule {\n source_labels = [\"__meta_kubernetes_namespace\", \"__meta_kubernetes_pod_container_name\"]\n action = \"replace\"\n target_label = \"job\"\n separator = \"/\"\n replacement = \"$1\"\n }\n\n // Label creation - \"container\" field from \"__meta_kubernetes_pod_uid\" and \"__meta_kubernetes_pod_container_name\"\n // Concatenate values __meta_kubernetes_pod_uid/__meta_kubernetes_pod_container_name.log\n rule {\n source_labels = [\"__meta_kubernetes_pod_uid\", \"__meta_kubernetes_pod_container_name\"]\n action = \"replace\"\n target_label = \"__path__\"\n separator = \"/\"\n replacement = \"/var/log/pods/*$1/*.log\"\n }\n\n // Label creation - \"container_runtime\" field from \"__meta_kubernetes_pod_container_id\"\n rule {\n source_labels = [\"__meta_kubernetes_pod_container_id\"]\n action = \"replace\"\n target_label = \"container_runtime\"\n regex = \"^(\\\\S+):\\\\/\\\\/.+$\"\n replacement = \"$1\"\n }\n}\n\n// loki.source.kubernetes tails logs from Kubernetes containers using the Kubernetes API.\nloki.source.kubernetes \"pod_logs\" {\n targets = discovery.relabel.pod_logs.output\n forward_to = [loki.process.pod_logs.receiver]\n}\n// loki.process receives log entries from other Loki components, applies one or more processing stages,\n// and forwards the results to the list of receivers in the component's arguments.\nloki.process \"pod_logs\" {\n stage.static_labels {\n values = {\n cluster = \"envoy-gateway\",\n }\n }\n\n forward_to = [loki.write.alloy.receiver]\n}" | |
| alloy.enabled | bool | false | |
| alloy.fullnameOverride | string | "alloy" | |
| fluent-bit.config.filters | string | "[FILTER]\n Name kubernetes\n Match kube.*\n Merge_Log On\n Keep_Log Off\n K8S-Logging.Parser On\n K8S-Logging.Exclude On\n\n[FILTER]\n Name grep\n Match kube.*\n Regex $kubernetes['container_name'] ^envoy$\n\n[FILTER]\n Name parser\n Match kube.*\n Key_Name log\n Parser envoy\n Reserve_Data True\n" | |
| fluent-bit.config.inputs | string | "[INPUT]\n Name tail\n Path /var/log/containers/*.log\n multiline.parser docker, cri\n Tag kube.*\n Mem_Buf_Limit 5MB\n Skip_Long_Lines On\n" | |
| fluent-bit.config.outputs | string | "[OUTPUT]\n Name loki\n Match kube.*\n Host loki.monitoring.svc.cluster.local\n Port 3100\n Labels job=fluentbit, app=$kubernetes['labels']['app'], k8s_namespace_name=$kubernetes['namespace_name'], k8s_pod_name=$kubernetes['pod_name'], k8s_container_name=$kubernetes['container_name']\n" | |
| fluent-bit.config.service | string | "[SERVICE]\n Daemon Off\n Flush {{ .Values.flush }}\n Log_Level {{ .Values.logLevel }}\n Parsers_File parsers.conf\n Parsers_File custom_parsers.conf\n HTTP_Server On\n HTTP_Listen 0.0.0.0\n HTTP_Port {{ .Values.metricsPort }}\n Health_Check On\n" | |
| fluent-bit.enabled | bool | true | |
| fluent-bit.fullnameOverride | string | "fluent-bit" | |
| fluent-bit.image.repository | string | "fluent/fluent-bit" | |
| fluent-bit.podAnnotations."fluentbit.io/exclude" | string | "true" | |
| fluent-bit.podAnnotations."prometheus.io/path" | string | "/api/v1/metrics/prometheus" | |
| fluent-bit.podAnnotations."prometheus.io/port" | string | "2020" | |
| fluent-bit.podAnnotations."prometheus.io/scrape" | string | "true" | |
| fluent-bit.testFramework.enabled | bool | false | |
| grafana.adminPassword | string | "admin" | |
| grafana.dashboardProviders."dashboardproviders.yaml".apiVersion | int | 1 | |
| grafana.dashboardProviders."dashboardproviders.yaml".providers[0].disableDeletion | bool | false | |
| grafana.dashboardProviders."dashboardproviders.yaml".providers[0].editable | bool | true | |
| grafana.dashboardProviders."dashboardproviders.yaml".providers[0].folder | string | "envoy-gateway" | |
| grafana.dashboardProviders."dashboardproviders.yaml".providers[0].name | string | "envoy-gateway" | |
| grafana.dashboardProviders."dashboardproviders.yaml".providers[0].options.path | string | "/var/lib/grafana/dashboards/envoy-gateway" | |
| grafana.dashboardProviders."dashboardproviders.yaml".providers[0].orgId | int | 1 | |
| grafana.dashboardProviders."dashboardproviders.yaml".providers[0].type | string | "file" | |
| grafana.dashboardsConfigMaps.envoy-gateway | string | "grafana-dashboards" | |
| grafana.datasources."datasources.yaml".apiVersion | int | 1 | |
| grafana.datasources."datasources.yaml".datasources[0].name | string | "Prometheus" | |
| grafana.datasources."datasources.yaml".datasources[0].type | string | "prometheus" | |
| grafana.datasources."datasources.yaml".datasources[0].url | string | "http://prometheus" | |
| grafana.enabled | bool | true | |
| grafana.fullnameOverride | string | "grafana" | |
| grafana.service.type | string | "LoadBalancer" | |
| grafana.testFramework.enabled | bool | false | |
| loki.backend.replicas | int | 0 | |
| loki.deploymentMode | string | "SingleBinary" | |
| loki.enabled | bool | true | |
| loki.fullnameOverride | string | "loki" | |
| loki.gateway.enabled | bool | false | |
| loki.loki.auth_enabled | bool | false | |
| loki.loki.commonConfig.replication_factor | int | 1 | |
| loki.loki.compactorAddress | string | "loki" | |
| loki.loki.memberlist | string | "loki-memberlist" | |
| loki.loki.rulerConfig.storage.type | string | "local" | |
| loki.loki.storage.type | string | "filesystem" | |
| loki.monitoring.lokiCanary.enabled | bool | false | |
| loki.monitoring.selfMonitoring.enabled | bool | false | |
| loki.monitoring.selfMonitoring.grafanaAgent.installOperator | bool | false | |
| loki.read.replicas | int | 0 | |
| loki.singleBinary.replicas | int | 1 | |
| loki.test.enabled | bool | false | |
| loki.write.replicas | int | 0 | |
| opentelemetry-collector.config.exporters.debug.verbosity | string | "detailed" | |
| opentelemetry-collector.config.exporters.loki.endpoint | string | "http://loki.monitoring.svc:3100/loki/api/v1/push" | |
| opentelemetry-collector.config.exporters.otlp.endpoint | string | "tempo.monitoring.svc:4317" | |
| opentelemetry-collector.config.exporters.otlp.tls.insecure | bool | true | |
| opentelemetry-collector.config.exporters.prometheus.endpoint | string | "[${env:MY_POD_IP}]:19001" | |
| opentelemetry-collector.config.extensions.health_check.endpoint | string | "[${env:MY_POD_IP}]:13133" | |
| opentelemetry-collector.config.processors.attributes.actions[0].action | string | "insert" | |
| opentelemetry-collector.config.processors.attributes.actions[0].key | string | "loki.attribute.labels" | |
| opentelemetry-collector.config.processors.attributes.actions[0].value | string | "k8s.pod.name, k8s.namespace.name" | |
| opentelemetry-collector.config.receivers.datadog.endpoint | string | "[${env:MY_POD_IP}]:8126" | |
| opentelemetry-collector.config.receivers.jaeger.protocols.grpc.endpoint | string | "[${env:MY_POD_IP}]:14250" | |
| opentelemetry-collector.config.receivers.jaeger.protocols.thrift_compact.endpoint | string | "[${env:MY_POD_IP}]:6831" | |
| opentelemetry-collector.config.receivers.jaeger.protocols.thrift_http.endpoint | string | "[${env:MY_POD_IP}]:14268" | |
| opentelemetry-collector.config.receivers.otlp.protocols.grpc.endpoint | string | "[${env:MY_POD_IP}]:4317" | |
| opentelemetry-collector.config.receivers.otlp.protocols.http.endpoint | string | "[${env:MY_POD_IP}]:4318" | |
| opentelemetry-collector.config.receivers.prometheus.config.scrape_configs[0].job_name | string | "opentelemetry-collector" | |
| opentelemetry-collector.config.receivers.prometheus.config.scrape_configs[0].scrape_interval | string | "10s" | |
| opentelemetry-collector.config.receivers.prometheus.config.scrape_configs[0].static_configs[0].targets[0] | string | "[${env:MY_POD_IP}]:8888" | |
| opentelemetry-collector.config.receivers.zipkin.endpoint | string | "[${env:MY_POD_IP}]:9411" | |
| opentelemetry-collector.config.service.extensions[0] | string | "health_check" | |
| opentelemetry-collector.config.service.pipelines.logs.exporters[0] | string | "loki" | |
| opentelemetry-collector.config.service.pipelines.logs.processors[0] | string | "attributes" | |
| opentelemetry-collector.config.service.pipelines.logs.receivers[0] | string | "otlp" | |
| opentelemetry-collector.config.service.pipelines.metrics.exporters[0] | string | "prometheus" | |
| opentelemetry-collector.config.service.pipelines.metrics.receivers[0] | string | "datadog" | |
| opentelemetry-collector.config.service.pipelines.metrics.receivers[1] | string | "otlp" | |
| opentelemetry-collector.config.service.pipelines.traces.exporters[0] | string | "otlp" | |
| opentelemetry-collector.config.service.pipelines.traces.receivers[0] | string | "datadog" | |
| opentelemetry-collector.config.service.pipelines.traces.receivers[1] | string | "otlp" | |
| opentelemetry-collector.config.service.pipelines.traces.receivers[2] | string | "zipkin" | |
| opentelemetry-collector.config.service.telemetry.metrics.address | string | "[${env:MY_POD_IP}]:8888" | |
| opentelemetry-collector.enabled | bool | false | |
| opentelemetry-collector.fullnameOverride | string | "otel-collector" | |
| opentelemetry-collector.image.repository | string | "otel/opentelemetry-collector-contrib" | |
| opentelemetry-collector.mode | string | "deployment" | |
| prometheus.alertmanager.enabled | bool | false | |
| prometheus.enabled | bool | true | |
| prometheus.kube-state-metrics.enabled | bool | false | |
| prometheus.prometheus-node-exporter.enabled | bool | false | |
| prometheus.prometheus-pushgateway.enabled | bool | false | |
| prometheus.server.fullnameOverride | string | "prometheus" | |
| prometheus.server.global.scrape_interval | string | "15s" | |
| prometheus.server.image.repository | string | "prom/prometheus" | |
| prometheus.server.persistentVolume.enabled | bool | false | |
| prometheus.server.readinessProbeInitialDelay | int | 0 | |
| prometheus.server.securityContext | object | {} | |
| prometheus.server.service.type | string | "LoadBalancer" | |
| tempo.enabled | bool | true | |
| tempo.fullnameOverride | string | "tempo" | |
| tempo.service.type | string | "LoadBalancer" | |

6 - Gateway Helm Chart

Version: v0.0.0-latest Type: application AppVersion: latest

The Helm chart for Envoy Gateway

Homepage: https://gateway.envoyproxy.io/

Maintainers

| Name | Email | Url |
|---|---|---|
| envoy-gateway-steering-committee | | https://github.com/envoyproxy/gateway/blob/main/GOVERNANCE.md |
| envoy-gateway-maintainers | | https://github.com/envoyproxy/gateway/blob/main/CODEOWNERS |

Values

| Key | Type | Default | Description |
|---|---|---|---|
| certgen | object | {"job":{"affinity":{},"annotations":{},"nodeSelector":{},"resources":{},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsGroup":65534,"runAsNonRoot":true,"runAsUser":65534,"seccompProfile":{"type":"RuntimeDefault"}},"tolerations":[],"ttlSecondsAfterFinished":30},"rbac":{"annotations":{},"labels":{}}} | Certgen is used to generate the certificates required by EnvoyGateway. If you want to construct a custom certificate, you can generate a custom certificate through Cert-Manager before installing EnvoyGateway. Certgen will not overwrite the custom certificate. Please do not manually modify values.yaml to disable certgen; it may cause EnvoyGateway OIDC, OAuth2, etc. to not work as expected. |
| config.envoyGateway.gateway.controllerName | string | "gateway.envoyproxy.io/gatewayclass-controller" | |
| config.envoyGateway.logging.level.default | string | "info" | |
| config.envoyGateway.provider.type | string | "Kubernetes" | |
| createNamespace | bool | false | |
| deployment.envoyGateway.image.repository | string | "" | |
| deployment.envoyGateway.image.tag | string | "" | |
| deployment.envoyGateway.imagePullPolicy | string | "" | |
| deployment.envoyGateway.imagePullSecrets | list | [] | |
| deployment.envoyGateway.resources.limits.memory | string | "1024Mi" | |
| deployment.envoyGateway.resources.requests.cpu | string | "100m" | |
| deployment.envoyGateway.resources.requests.memory | string | "256Mi" | |
| deployment.envoyGateway.securityContext.allowPrivilegeEscalation | bool | false | |
| deployment.envoyGateway.securityContext.capabilities.drop[0] | string | "ALL" | |
| deployment.envoyGateway.securityContext.privileged | bool | false | |
| deployment.envoyGateway.securityContext.runAsGroup | int | 65532 | |
| deployment.envoyGateway.securityContext.runAsNonRoot | bool | true | |
| deployment.envoyGateway.securityContext.runAsUser | int | 65532 | |
| deployment.envoyGateway.securityContext.seccompProfile.type | string | "RuntimeDefault" | |
| deployment.pod.affinity | object | {} | |
| deployment.pod.annotations."prometheus.io/port" | string | "19001" | |
| deployment.pod.annotations."prometheus.io/scrape" | string | "true" | |
| deployment.pod.labels | object | {} | |
| deployment.pod.nodeSelector | object | {} | |
| deployment.pod.tolerations | list | [] | |
| deployment.pod.topologySpreadConstraints | list | [] | |
| deployment.ports[0].name | string | "grpc" | |
| deployment.ports[0].port | int | 18000 | |
| deployment.ports[0].targetPort | int | 18000 | |
| deployment.ports[1].name | string | "ratelimit" | |
| deployment.ports[1].port | int | 18001 | |
| deployment.ports[1].targetPort | int | 18001 | |
| deployment.ports[2].name | string | "wasm" | |
| deployment.ports[2].port | int | 18002 | |
| deployment.ports[2].targetPort | int | 18002 | |
| deployment.ports[3].name | string | "metrics" | |
| deployment.ports[3].port | int | 19001 | |
| deployment.ports[3].targetPort | int | 19001 | |
| deployment.priorityClassName | string | nil | |
| deployment.replicas | int | 1 | |
| global.images.envoyGateway.image | string | nil | |
| global.images.envoyGateway.pullPolicy | string | nil | |
| global.images.envoyGateway.pullSecrets | list | [] | |
| global.images.ratelimit.image | string | "docker.io/envoyproxy/ratelimit:master" | |
| global.images.ratelimit.pullPolicy | string | "IfNotPresent" | |
| global.images.ratelimit.pullSecrets | list | [] | |
| kubernetesClusterDomain | string | "cluster.local" | |
| podDisruptionBudget.minAvailable | int | 0 | |
| service.annotations | object | {} | |

7 - Migrating from Ingress Resources

Introduction

Migrating from Ingress to Envoy Gateway involves converting existing Ingress resources into resources compatible with Envoy Gateway. The ingress2gateway tool simplifies this migration by transforming Ingress resources into Gateway API resources that Envoy Gateway can use. This guide will walk you through the prerequisites, installation of the ingress2gateway tool, and provide an example migration process.

Prerequisites

Before you start the migration, ensure you have the following:

  1. Envoy Gateway Installed: You need Envoy Gateway set up in your Kubernetes cluster. Follow the Envoy Gateway installation guide for details.
  2. Kubernetes Cluster Access: Ensure you have access to your Kubernetes cluster and necessary permissions to manage resources.
  3. Installation of ingress2gateway Tool: You need to install the ingress2gateway tool in your Kubernetes cluster and configure it accordingly. Follow the ingress2gateway tool installation guide for details.

Example Migration

Here’s a step-by-step example of migrating from Ingress to Envoy Gateway using ingress2gateway:

1. Install and Configure Envoy Gateway

Ensure that Envoy Gateway is installed and running in your cluster. Follow the official Envoy Gateway installation guide for setup instructions.

2. Create a GatewayClass

To ensure the generated HTTPRoutes are programmed correctly in the Envoy Gateway data plane, create a GatewayClass that links to the Envoy Gateway controller.

Create a GatewayClass resource:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: envoy-gateway-class
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller

Apply this resource:

kubectl apply -f gatewayclass.yaml

3. Install Ingress2gateway

Ensure you have the Ingress2gateway package installed. If not, follow the package’s installation instructions.

4. Run Ingress2gateway

Use Ingress2gateway to read your existing Ingress resources and translate them into Gateway API resources.

./ingress2gateway print

This command will:

  1. Read your kubeconfig file to extract the cluster credentials and the currently active namespace.
  2. Search for Ingress and provider-specific resources in that namespace.
  3. Convert them to Gateway API resources (Gateways and HTTPRoutes).

Example Ingress Configuration

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /foo
        pathType: Prefix
        backend:
          service:
            name: foo-service
            port:
              number: 80
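
For an Ingress like the one above, ingress2gateway emits Gateway API resources along these lines (a sketch of the expected shape of the output; the generated resource names may differ):

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: example-ingress   # illustrative name; the tool picks its own
  namespace: default
spec:
  parentRefs:
  - name: example-gateway  # attach to the Gateway you create in step 7
  hostnames:
  - example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /foo
    backendRefs:
    - name: foo-service
      port: 80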

5. Save the Output

The command will output the equivalent Gateway API resources in YAML/JSON format to stdout. Save this output to a file for further use.

./ingress2gateway print > gateway-resources.yaml

6. Apply the Translated Resources

Apply the translated Gateway API resources to your cluster.

kubectl apply -f gateway-resources.yaml

7. Create a Gateway Resource

Create a Gateway resource specifying the GatewayClass created earlier and including the necessary listeners.

apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
  namespace: default
spec:
  gatewayClassName: envoy-gateway-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: example.com

Apply this resource:

kubectl apply -f gateway.yaml

8. Validate the Migration

Ensure the HTTPRoutes and Gateways are correctly set up and that traffic is being routed as expected. Validate the new configuration by checking the status of the Gateway and HTTPRoute resources.

kubectl get gateways
kubectl get httproutes
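
If something looks off, describing a resource surfaces its conditions and events (using the example-gateway created above):

kubectl describe gateway example-gateway -n default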

9. Monitor and Troubleshoot

Monitor the Envoy Gateway logs and metrics to ensure everything is functioning correctly. Troubleshoot any issues by reviewing the Gateway and HTTPRoute statuses and Envoy Gateway controller logs.

Summary

By following this guide, users can effectively migrate their existing Ingress resources to Envoy Gateway using the Ingress2gateway package. Creating a GatewayClass and linking it to the Envoy Gateway controller ensures that the translated resources are properly programmed in the data plane, providing a seamless transition to the Envoy Gateway environment.