Installation
1 - Install with Helm
Helm is a package manager for Kubernetes that automates the release and management of software on Kubernetes.
Envoy Gateway can be installed with a Helm chart in a few simple steps, depending on whether you are deploying for the first time, upgrading an existing installation, or migrating from a previous Envoy Gateway release.
Before you begin
Compatibility Matrix
Refer to the Version Compatibility Matrix to learn more.
The Envoy Gateway Helm chart is hosted on DockerHub and published at oci://docker.io/envoyproxy/gateway-helm.
Install with Helm
Envoy Gateway is typically deployed to Kubernetes from the command line. If you don't have a Kubernetes cluster, you should use kind to create one.
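If you need a local cluster to experiment with, a minimal sketch using kind (assuming kind is already installed; the cluster name is illustrative) looks like this:

```shell
# Create a local Kubernetes cluster named "envoy-gateway-demo".
kind create cluster --name envoy-gateway-demo

# Point kubectl at the new cluster's context.
kubectl cluster-info --context kind-envoy-gateway-demo
```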
Developer Guide
Refer to the Developer Guide to learn more.
Install the Gateway API CRDs and Envoy Gateway:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
Install the GatewayClass, Gateway, HTTPRoute, and example app:
kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/latest/quickstart.yaml -n default
Note: quickstart.yaml defines that Envoy Gateway will listen for traffic on port 80 on its globally-routable IP address, to make it easy to use browsers to test Envoy Gateway. When Envoy Gateway sees that its listener is using a privileged port (< 1024), it maps this internally to an unprivileged port, so that Envoy Gateway doesn't need additional privileges. It's important to be aware of this mapping, since you may need to consider it when debugging.
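With the quickstart resources applied, the routing can be exercised from the command line. This sketch assumes the quickstart's Gateway is named eg and that its HTTPRoute matches the host www.example.com, as in the upstream quickstart:

```shell
# Look up the external address assigned to the quickstart Gateway.
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')

# Send a test request through Envoy, matching the quickstart's HTTPRoute host.
curl --verbose --header "Host: www.example.com" "http://${GATEWAY_HOST}/get"
```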
Customizing the Helm Chart
Below are some quick ways to use the helm install command to customize an Envoy Gateway installation.
Increase the replicas
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace --set deployment.replicas=2
Change the kubernetesClusterDomain name
If you have installed your cluster with a different domain name, you can use the command below.
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace --set kubernetesClusterDomain=<domain name>
**Note:** The options above are quick ways to customize the installation directly on the command line. If you are looking for more complex changes, values.yaml can help you.
Using values.yaml for complex installations
deployment:
  envoyGateway:
    resources:
      limits:
        cpu: 700m
        memory: 128Mi
      requests:
        cpu: 10m
        memory: 64Mi
  ports:
    - name: grpc
      port: 18005
      targetPort: 18000
    - name: ratelimit
      port: 18006
      targetPort: 18001
config:
  envoyGateway:
    logging:
      level:
        default: debug
Here we have made three changes to our values.yaml file: increased the CPU resource limit to 700m, changed the gRPC port to 18005 and the rate limit port to 18006, and updated the logging level to debug.
You can install Envoy Gateway with the values.yaml file using the command below.
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace -f values.yaml
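To see every configurable field before writing a values.yaml, the chart's full default values can be printed with helm itself:

```shell
# Print the chart's default values; redirect to a file to use as a starting point.
helm show values oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest > values.yaml
```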
Helm Chart Values
If you want to know all the available fields inside the values.yaml file, refer to the Helm Chart Values.
Open Ports
These are the ports used by Envoy Gateway and the managed Envoy Proxies.
Envoy Gateway
Envoy Gateway | Address | Port | Configurable |
---|---|---|---|
Xds EnvoyProxy Server | 0.0.0.0 | 18000 | No |
Xds RateLimit Server | 0.0.0.0 | 18001 | No |
Admin Server | 127.0.0.1 | 19000 | Yes |
Metrics Server | 0.0.0.0 | 19001 | No |
Health Check | 127.0.0.1 | 8081 | No |
EnvoyProxy
Envoy Proxy | Address | Port |
---|---|---|
Admin Server | 127.0.0.1 | 19000 |
Health Check | 0.0.0.0 | 19001 |
Next Steps
Envoy Gateway should now be successfully installed and running. To experience more features of Envoy Gateway, refer to the Tasks.
2 - Install with Kubernetes YAML
This task walks you through installing Envoy Gateway in your Kubernetes cluster.
The manual install process does not allow as much control over configuration as the Helm install method, so if you need more control over your Envoy Gateway installation, it is recommended that you use Helm.
Before you begin
Envoy Gateway is designed to run in Kubernetes for production. The most essential requirements are:
- Kubernetes 1.25 or later
- The kubectl command-line tool
Compatibility Matrix
Refer to the Version Compatibility Matrix to learn more.
Install with YAML
Envoy Gateway is typically deployed to Kubernetes from the command line. If you don't have a Kubernetes cluster, you should use kind to create one.
Developer Guide
Refer to the Developer Guide to learn more.
In your terminal, run the following command:
kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/latest/install.yaml
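As with the Helm install, it can help to wait for the deployment to become available before proceeding:

```shell
# Block until the Envoy Gateway deployment reports the Available condition.
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
```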
Next Steps
Envoy Gateway should now be successfully installed and running. To experience more features of Envoy Gateway, refer to the Tasks.
3 - Control Plane Authentication using custom certs
Envoy Gateway establishes a secure TLS connection for control plane communication between the Envoy Gateway pods and the Envoy Proxy fleet. The TLS certificates used here are self-signed, generated by a Job that runs before Envoy Gateway is created, and mounted onto the Envoy Gateway and Envoy Proxy pods.
This task walks you through configuring custom certs for control plane authentication.
Before you begin
We use Cert-Manager to manage the certificates. You can install it by following the official guide.
Configure custom certs for the control plane
First, you need to set up a CA issuer. In this task, we use a selfsigned-issuer as an example. You should not use a self-signed issuer in production; use a real CA issuer instead.
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  labels:
    app.kubernetes.io/name: envoy-gateway
  name: selfsigned-issuer
  namespace: envoy-gateway-system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: envoy-gateway-ca
  namespace: envoy-gateway-system
spec:
  isCA: true
  commonName: envoy-gateway
  secretName: envoy-gateway-ca
  privateKey:
    algorithm: RSA
    size: 2048
  issuerRef:
    name: selfsigned-issuer
    kind: Issuer
    group: cert-manager.io
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  labels:
    app.kubernetes.io/name: envoy-gateway
  name: eg-issuer
  namespace: envoy-gateway-system
spec:
  ca:
    secretName: envoy-gateway-ca
EOF
Create a cert for the Envoy Gateway controller; the cert will be stored in the envoy-gateway Secret.

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  labels:
    app.kubernetes.io/name: envoy-gateway
  name: envoy-gateway
  namespace: envoy-gateway-system
spec:
  commonName: envoy-gateway
  dnsNames:
  - "envoy-gateway"
  - "envoy-gateway.envoy-gateway-system"
  - "envoy-gateway.envoy-gateway-system.svc"
  - "envoy-gateway.envoy-gateway-system.svc.cluster.local"
  issuerRef:
    kind: Issuer
    name: eg-issuer
  usages:
  - "digital signature"
  - "data encipherment"
  - "key encipherment"
  - "content commitment"
  secretName: envoy-gateway
EOF
Create a cert for the Envoy Proxies; the cert will be stored in the envoy Secret.

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  labels:
    app.kubernetes.io/name: envoy-gateway
  name: envoy
  namespace: envoy-gateway-system
spec:
  commonName: "*"
  dnsNames:
  - "*.envoy-gateway-system"
  issuerRef:
    kind: Issuer
    name: eg-issuer
  usages:
  - "digital signature"
  - "data encipherment"
  - "key encipherment"
  - "content commitment"
  secretName: envoy
EOF
Create a cert for rate limiting; the cert will be stored in the envoy-rate-limit Secret.

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  labels:
    app.kubernetes.io/name: envoy-gateway
  name: envoy-rate-limit
  namespace: envoy-gateway-system
spec:
  commonName: "*"
  dnsNames:
  - "*.envoy-gateway-system"
  issuerRef:
    kind: Issuer
    name: eg-issuer
  usages:
  - "digital signature"
  - "data encipherment"
  - "key encipherment"
  - "content commitment"
  secretName: envoy-rate-limit
EOF
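Before reinstalling, it is worth confirming that cert-manager has issued all three Secrets referenced above:

```shell
# Each Certificate above writes its key pair into one of these Secrets.
kubectl get secrets -n envoy-gateway-system envoy-gateway envoy envoy-rate-limit
```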
Now you can follow the Helm chart installation guide to install Envoy Gateway with your custom certs.
4 - Gateway Addons Helm Chart
An Add-ons Helm chart for Envoy Gateway
Homepage: https://gateway.envoyproxy.io/
Maintainers
Name | Url | |
---|---|---|
envoy-gateway-steering-committee | https://github.com/envoyproxy/gateway/blob/main/GOVERNANCE.md | |
envoy-gateway-maintainers | https://github.com/envoyproxy/gateway/blob/main/CODEOWNERS |
Source Code
Requirements
Repository | Name | Version |
---|---|---|
https://fluent.github.io/helm-charts | fluent-bit | 0.30.4 |
https://grafana.github.io/helm-charts | alloy | 0.9.2 |
https://grafana.github.io/helm-charts | grafana | 8.0.0 |
https://grafana.github.io/helm-charts | loki | 4.8.0 |
https://grafana.github.io/helm-charts | tempo | 1.3.1 |
https://open-telemetry.github.io/opentelemetry-helm-charts | opentelemetry-collector | 0.108.0 |
https://prometheus-community.github.io/helm-charts | prometheus | 25.21.0 |
Values
Key | Type | Default | Description |
---|---|---|---|
alloy.alloy.configMap.content | string | "// Write your Alloy config here:\nlogging {\n level = \"info\"\n format = \"logfmt\"\n}\nloki.write \"alloy\" {\n endpoint {\n url = \"http://loki.monitoring.svc:3100/loki/api/v1/push\"\n }\n}\n// discovery.kubernetes allows you to find scrape targets from Kubernetes resources.\n// It watches cluster state and ensures targets are continually synced with what is currently running in your cluster.\ndiscovery.kubernetes \"pod\" {\n role = \"pod\"\n}\n\n// discovery.relabel rewrites the label set of the input targets by applying one or more relabeling rules.\n// If no rules are defined, then the input targets are exported as-is.\ndiscovery.relabel \"pod_logs\" {\n targets = discovery.kubernetes.pod.targets\n\n // Label creation - \"namespace\" field from \"__meta_kubernetes_namespace\"\n rule {\n source_labels = [\"__meta_kubernetes_namespace\"]\n action = \"replace\"\n target_label = \"namespace\"\n }\n\n // Label creation - \"pod\" field from \"__meta_kubernetes_pod_name\"\n rule {\n source_labels = [\"__meta_kubernetes_pod_name\"]\n action = \"replace\"\n target_label = \"pod\"\n }\n\n // Label creation - \"container\" field from \"__meta_kubernetes_pod_container_name\"\n rule {\n source_labels = [\"__meta_kubernetes_pod_container_name\"]\n action = \"replace\"\n target_label = \"container\"\n }\n\n // Label creation - \"app\" field from \"__meta_kubernetes_pod_label_app_kubernetes_io_name\"\n rule {\n source_labels = [\"__meta_kubernetes_pod_label_app_kubernetes_io_name\"]\n action = \"replace\"\n target_label = \"app\"\n }\n\n // Label creation - \"job\" field from \"__meta_kubernetes_namespace\" and \"__meta_kubernetes_pod_container_name\"\n // Concatenate values __meta_kubernetes_namespace/__meta_kubernetes_pod_container_name\n rule {\n source_labels = [\"__meta_kubernetes_namespace\", \"__meta_kubernetes_pod_container_name\"]\n action = \"replace\"\n target_label = \"job\"\n separator = \"/\"\n replacement = \"$1\"\n 
}\n\n // Label creation - \"container\" field from \"__meta_kubernetes_pod_uid\" and \"__meta_kubernetes_pod_container_name\"\n // Concatenate values __meta_kubernetes_pod_uid/__meta_kubernetes_pod_container_name.log\n rule {\n source_labels = [\"__meta_kubernetes_pod_uid\", \"__meta_kubernetes_pod_container_name\"]\n action = \"replace\"\n target_label = \"__path__\"\n separator = \"/\"\n replacement = \"/var/log/pods/*$1/*.log\"\n }\n\n // Label creation - \"container_runtime\" field from \"__meta_kubernetes_pod_container_id\"\n rule {\n source_labels = [\"__meta_kubernetes_pod_container_id\"]\n action = \"replace\"\n target_label = \"container_runtime\"\n regex = \"^(\\\\S+):\\\\/\\\\/.+$\"\n replacement = \"$1\"\n }\n}\n\n// loki.source.kubernetes tails logs from Kubernetes containers using the Kubernetes API.\nloki.source.kubernetes \"pod_logs\" {\n targets = discovery.relabel.pod_logs.output\n forward_to = [loki.process.pod_logs.receiver]\n}\n// loki.process receives log entries from other Loki components, applies one or more processing stages,\n// and forwards the results to the list of receivers in the component’s arguments.\nloki.process \"pod_logs\" {\n stage.static_labels {\n values = {\n cluster = \"envoy-gateway\",\n }\n }\n\n forward_to = [loki.write.alloy.receiver]\n}" | |
alloy.enabled | bool | false | |
alloy.fullnameOverride | string | "alloy" | |
fluent-bit.config.filters | string | "[FILTER]\n Name kubernetes\n Match kube.*\n Merge_Log On\n Keep_Log Off\n K8S-Logging.Parser On\n K8S-Logging.Exclude On\n\n[FILTER]\n Name grep\n Match kube.*\n Regex $kubernetes['container_name'] ^envoy$\n\n[FILTER]\n Name parser\n Match kube.*\n Key_Name log\n Parser envoy\n Reserve_Data True\n" | |
fluent-bit.config.inputs | string | "[INPUT]\n Name tail\n Path /var/log/containers/*.log\n multiline.parser docker, cri\n Tag kube.*\n Mem_Buf_Limit 5MB\n Skip_Long_Lines On\n" | |
fluent-bit.config.outputs | string | "[OUTPUT]\n Name loki\n Match kube.*\n Host loki.monitoring.svc.cluster.local\n Port 3100\n Labels job=fluentbit, app=$kubernetes['labels']['app'], k8s_namespace_name=$kubernetes['namespace_name'], k8s_pod_name=$kubernetes['pod_name'], k8s_container_name=$kubernetes['container_name']\n" | |
fluent-bit.config.service | string | "[SERVICE]\n Daemon Off\n Flush {{ .Values.flush }}\n Log_Level {{ .Values.logLevel }}\n Parsers_File parsers.conf\n Parsers_File custom_parsers.conf\n HTTP_Server On\n HTTP_Listen 0.0.0.0\n HTTP_Port {{ .Values.metricsPort }}\n Health_Check On\n" | |
fluent-bit.enabled | bool | true | |
fluent-bit.fullnameOverride | string | "fluent-bit" | |
fluent-bit.image.repository | string | "fluent/fluent-bit" | |
fluent-bit.podAnnotations."fluentbit.io/exclude" | string | "true" | |
fluent-bit.podAnnotations."prometheus.io/path" | string | "/api/v1/metrics/prometheus" | |
fluent-bit.podAnnotations."prometheus.io/port" | string | "2020" | |
fluent-bit.podAnnotations."prometheus.io/scrape" | string | "true" | |
fluent-bit.testFramework.enabled | bool | false | |
grafana.adminPassword | string | "admin" | |
grafana.dashboardProviders."dashboardproviders.yaml".apiVersion | int | 1 | |
grafana.dashboardProviders."dashboardproviders.yaml".providers[0].disableDeletion | bool | false | |
grafana.dashboardProviders."dashboardproviders.yaml".providers[0].editable | bool | true | |
grafana.dashboardProviders."dashboardproviders.yaml".providers[0].folder | string | "envoy-gateway" | |
grafana.dashboardProviders."dashboardproviders.yaml".providers[0].name | string | "envoy-gateway" | |
grafana.dashboardProviders."dashboardproviders.yaml".providers[0].options.path | string | "/var/lib/grafana/dashboards/envoy-gateway" | |
grafana.dashboardProviders."dashboardproviders.yaml".providers[0].orgId | int | 1 | |
grafana.dashboardProviders."dashboardproviders.yaml".providers[0].type | string | "file" | |
grafana.dashboardsConfigMaps.envoy-gateway | string | "grafana-dashboards" | |
grafana.datasources."datasources.yaml".apiVersion | int | 1 | |
grafana.datasources."datasources.yaml".datasources[0].name | string | "Prometheus" | |
grafana.datasources."datasources.yaml".datasources[0].type | string | "prometheus" | |
grafana.datasources."datasources.yaml".datasources[0].url | string | "http://prometheus" | |
grafana.enabled | bool | true | |
grafana.fullnameOverride | string | "grafana" | |
grafana.service.type | string | "LoadBalancer" | |
grafana.testFramework.enabled | bool | false | |
loki.backend.replicas | int | 0 | |
loki.deploymentMode | string | "SingleBinary" | |
loki.enabled | bool | true | |
loki.fullnameOverride | string | "loki" | |
loki.gateway.enabled | bool | false | |
loki.loki.auth_enabled | bool | false | |
loki.loki.commonConfig.replication_factor | int | 1 | |
loki.loki.compactorAddress | string | "loki" | |
loki.loki.memberlist | string | "loki-memberlist" | |
loki.loki.rulerConfig.storage.type | string | "local" | |
loki.loki.storage.type | string | "filesystem" | |
loki.monitoring.lokiCanary.enabled | bool | false | |
loki.monitoring.selfMonitoring.enabled | bool | false | |
loki.monitoring.selfMonitoring.grafanaAgent.installOperator | bool | false | |
loki.read.replicas | int | 0 | |
loki.singleBinary.replicas | int | 1 | |
loki.test.enabled | bool | false | |
loki.write.replicas | int | 0 | |
opentelemetry-collector.config.exporters.debug.verbosity | string | "detailed" | |
opentelemetry-collector.config.exporters.loki.endpoint | string | "http://loki.monitoring.svc:3100/loki/api/v1/push" | |
opentelemetry-collector.config.exporters.otlp.endpoint | string | "tempo.monitoring.svc:4317" | |
opentelemetry-collector.config.exporters.otlp.tls.insecure | bool | true | |
opentelemetry-collector.config.exporters.prometheus.endpoint | string | "[${env:MY_POD_IP}]:19001" | |
opentelemetry-collector.config.extensions.health_check.endpoint | string | "[${env:MY_POD_IP}]:13133" | |
opentelemetry-collector.config.processors.attributes.actions[0].action | string | "insert" | |
opentelemetry-collector.config.processors.attributes.actions[0].key | string | "loki.attribute.labels" | |
opentelemetry-collector.config.processors.attributes.actions[0].value | string | "k8s.pod.name, k8s.namespace.name" | |
opentelemetry-collector.config.receivers.datadog.endpoint | string | "[${env:MY_POD_IP}]:8126" | |
opentelemetry-collector.config.receivers.jaeger.protocols.grpc.endpoint | string | "[${env:MY_POD_IP}]:14250" | |
opentelemetry-collector.config.receivers.jaeger.protocols.thrift_compact.endpoint | string | "[${env:MY_POD_IP}]:6831" | |
opentelemetry-collector.config.receivers.jaeger.protocols.thrift_http.endpoint | string | "[${env:MY_POD_IP}]:14268" | |
opentelemetry-collector.config.receivers.otlp.protocols.grpc.endpoint | string | "[${env:MY_POD_IP}]:4317" | |
opentelemetry-collector.config.receivers.otlp.protocols.http.endpoint | string | "[${env:MY_POD_IP}]:4318" | |
opentelemetry-collector.config.receivers.prometheus.config.scrape_configs[0].job_name | string | "opentelemetry-collector" | |
opentelemetry-collector.config.receivers.prometheus.config.scrape_configs[0].scrape_interval | string | "10s" | |
opentelemetry-collector.config.receivers.prometheus.config.scrape_configs[0].static_configs[0].targets[0] | string | "[${env:MY_POD_IP}]:8888" | |
opentelemetry-collector.config.receivers.zipkin.endpoint | string | "[${env:MY_POD_IP}]:9411" | |
opentelemetry-collector.config.service.extensions[0] | string | "health_check" | |
opentelemetry-collector.config.service.pipelines.logs.exporters[0] | string | "loki" | |
opentelemetry-collector.config.service.pipelines.logs.processors[0] | string | "attributes" | |
opentelemetry-collector.config.service.pipelines.logs.receivers[0] | string | "otlp" | |
opentelemetry-collector.config.service.pipelines.metrics.exporters[0] | string | "prometheus" | |
opentelemetry-collector.config.service.pipelines.metrics.receivers[0] | string | "datadog" | |
opentelemetry-collector.config.service.pipelines.metrics.receivers[1] | string | "otlp" | |
opentelemetry-collector.config.service.pipelines.traces.exporters[0] | string | "otlp" | |
opentelemetry-collector.config.service.pipelines.traces.receivers[0] | string | "datadog" | |
opentelemetry-collector.config.service.pipelines.traces.receivers[1] | string | "otlp" | |
opentelemetry-collector.config.service.pipelines.traces.receivers[2] | string | "zipkin" | |
opentelemetry-collector.config.service.telemetry.metrics.address | string | "[${env:MY_POD_IP}]:8888" | |
opentelemetry-collector.enabled | bool | false | |
opentelemetry-collector.fullnameOverride | string | "otel-collector" | |
opentelemetry-collector.image.repository | string | "otel/opentelemetry-collector-contrib" | |
opentelemetry-collector.mode | string | "deployment" | |
prometheus.alertmanager.enabled | bool | false | |
prometheus.enabled | bool | true | |
prometheus.kube-state-metrics.enabled | bool | false | |
prometheus.prometheus-node-exporter.enabled | bool | false | |
prometheus.prometheus-pushgateway.enabled | bool | false | |
prometheus.server.fullnameOverride | string | "prometheus" | |
prometheus.server.global.scrape_interval | string | "15s" | |
prometheus.server.image.repository | string | "prom/prometheus" | |
prometheus.server.persistentVolume.enabled | bool | false | |
prometheus.server.readinessProbeInitialDelay | int | 0 | |
prometheus.server.securityContext | object | {} | |
prometheus.server.service.type | string | "LoadBalancer" | |
tempo.enabled | bool | true | |
tempo.fullnameOverride | string | "tempo" | |
tempo.service.type | string | "LoadBalancer" |
5 - Gateway Helm Chart
The Helm chart for Envoy Gateway
Homepage: https://gateway.envoyproxy.io/
Maintainers
Name | Url | |
---|---|---|
envoy-gateway-steering-committee | https://github.com/envoyproxy/gateway/blob/main/GOVERNANCE.md | |
envoy-gateway-maintainers | https://github.com/envoyproxy/gateway/blob/main/CODEOWNERS |
Source Code
Values
Key | Type | Default | Description |
---|---|---|---|
certgen | object | {"job":{"affinity":{},"annotations":{},"nodeSelector":{},"resources":{},"securityContext":{"allowPrivilegeEscalation":false,"capabilities":{"drop":["ALL"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsGroup":65534,"runAsNonRoot":true,"runAsUser":65534,"seccompProfile":{"type":"RuntimeDefault"}},"tolerations":[],"ttlSecondsAfterFinished":30},"rbac":{"annotations":{},"labels":{}}} | Certgen is used to generate the certificates required by EnvoyGateway. If you want to construct a custom certificate, you can generate a custom certificate through Cert-Manager before installing EnvoyGateway. Certgen will not overwrite the custom certificate. Please do not manually modify values.yaml to disable certgen, it may cause EnvoyGateway OIDC,OAuth2,etc. to not work as expected. |
config.envoyGateway.gateway.controllerName | string | "gateway.envoyproxy.io/gatewayclass-controller" | |
config.envoyGateway.logging.level.default | string | "info" | |
config.envoyGateway.provider.type | string | "Kubernetes" | |
createNamespace | bool | false | |
deployment.envoyGateway.image.repository | string | "" | |
deployment.envoyGateway.image.tag | string | "" | |
deployment.envoyGateway.imagePullPolicy | string | "" | |
deployment.envoyGateway.imagePullSecrets | list | [] | |
deployment.envoyGateway.resources.limits.memory | string | "1024Mi" | |
deployment.envoyGateway.resources.requests.cpu | string | "100m" | |
deployment.envoyGateway.resources.requests.memory | string | "256Mi" | |
deployment.envoyGateway.securityContext.allowPrivilegeEscalation | bool | false | |
deployment.envoyGateway.securityContext.capabilities.drop[0] | string | "ALL" | |
deployment.envoyGateway.securityContext.privileged | bool | false | |
deployment.envoyGateway.securityContext.runAsGroup | int | 65532 | |
deployment.envoyGateway.securityContext.runAsNonRoot | bool | true | |
deployment.envoyGateway.securityContext.runAsUser | int | 65532 | |
deployment.envoyGateway.securityContext.seccompProfile.type | string | "RuntimeDefault" | |
deployment.pod.affinity | object | {} | |
deployment.pod.annotations."prometheus.io/port" | string | "19001" | |
deployment.pod.annotations."prometheus.io/scrape" | string | "true" | |
deployment.pod.labels | object | {} | |
deployment.pod.nodeSelector | object | {} | |
deployment.pod.tolerations | list | [] | |
deployment.pod.topologySpreadConstraints | list | [] | |
deployment.ports[0].name | string | "grpc" | |
deployment.ports[0].port | int | 18000 | |
deployment.ports[0].targetPort | int | 18000 | |
deployment.ports[1].name | string | "ratelimit" | |
deployment.ports[1].port | int | 18001 | |
deployment.ports[1].targetPort | int | 18001 | |
deployment.ports[2].name | string | "wasm" | |
deployment.ports[2].port | int | 18002 | |
deployment.ports[2].targetPort | int | 18002 | |
deployment.ports[3].name | string | "metrics" | |
deployment.ports[3].port | int | 19001 | |
deployment.ports[3].targetPort | int | 19001 | |
deployment.priorityClassName | string | nil | |
deployment.replicas | int | 1 | |
global.images.envoyGateway.image | string | nil | |
global.images.envoyGateway.pullPolicy | string | nil | |
global.images.envoyGateway.pullSecrets | list | [] | |
global.images.ratelimit.image | string | "docker.io/envoyproxy/ratelimit:master" | |
global.images.ratelimit.pullPolicy | string | "IfNotPresent" | |
global.images.ratelimit.pullSecrets | list | [] | |
kubernetesClusterDomain | string | "cluster.local" | |
podDisruptionBudget.minAvailable | int | 0 | |
service.annotations | object | {} |
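As an illustration of overriding the defaults listed above, a small values file might bump the replica count, set a PodDisruptionBudget, and change the cluster domain. The keys are taken from the table; the specific numbers are illustrative, not recommendations:

```yaml
deployment:
  replicas: 2
podDisruptionBudget:
  minAvailable: 1
kubernetesClusterDomain: example.local
```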
6 - Compatibility Matrix
Envoy Gateway relies on the Envoy Proxy and the Gateway API, and runs within a Kubernetes cluster. Not every combination of versions of these products works with Envoy Gateway. The supported version combinations are listed below; bold type indicates the versions of the Envoy Proxy and the Gateway API actually compiled into each Envoy Gateway release.
Envoy Gateway version | Envoy Proxy version | Rate Limit version | Gateway API version | Kubernetes version |
---|---|---|---|---|
v1.0.0 | distroless-v1.29.2 | 19f2079f | v1.0.0 | v1.26, v1.27, v1.28, v1.29 |
v0.6.0 | distroless-v1.28-latest | b9796237 | v1.0.0 | v1.26, v1.27, v1.28 |
v0.5.0 | v1.27-latest | e059638d | v0.7.1 | v1.25, v1.26, v1.27 |
v0.4.0 | v1.26-latest | 542a6047 | v0.6.2 | v1.25, v1.26, v1.27 |
v0.3.0 | v1.25-latest | f28024e3 | v0.6.1 | v1.24, v1.25, v1.26 |
v0.2.0 | v1.23-latest | | v0.5.1 | v1.24 |
latest | dev-latest | master | v1.0.0 | v1.26, v1.27, v1.28, v1.29 |