Welcome to Envoy Gateway
Envoy Gateway Documents
Note
This project is under
active development. Many features are not complete. We would love for you to
Get Involved!
Envoy Gateway is an open source project for managing Envoy Proxy as a standalone or Kubernetes-based application
gateway. Gateway API resources are used to dynamically provision and configure the managed Envoy Proxies.
Ready to get started?
1 - Design
This section includes Designs of Envoy Gateway.
1.1 - Goals
The high-level goal of the Envoy Gateway project is to attract more users to Envoy by lowering barriers to adoption
through expressive, extensible, role-oriented APIs that support a multitude of ingress and L7/L4 traffic routing
use cases; and provide a common foundation for vendors to build value-added products without having to re-engineer
fundamental interactions.
Objectives
Expressive API
The Envoy Gateway project will expose a simple and expressive API, with defaults set for many capabilities.
The API will be the Kubernetes-native Gateway API, plus Envoy-specific extensions and extension points. This
expressive and familiar API will make Envoy accessible to more users, especially application developers, and make Envoy
a stronger option for “getting started” as compared to other proxies. Application developers will use the API out of
the box without needing to understand in-depth concepts of Envoy Proxy or use OSS wrappers. The API will use familiar
nouns that users understand.
The core full-featured Envoy xDS APIs will remain available for those who need more capability and for those who
add functionality on top of Envoy Gateway, such as commercial API gateway products.
This expressive API will not be implemented by Envoy Proxy, but rather an officially supported translation layer
on top.
Batteries included
Envoy Gateway will simplify how Envoy is deployed and managed, allowing application developers to focus on
delivering core business value.
The project plans to include additional infrastructure components required by users to fulfill their Ingress and API
gateway needs: It will handle Envoy infrastructure provisioning (e.g. Kubernetes Service, Deployment, et cetera), and
possibly infrastructure provisioning of related sidecar services. It will include sensible defaults with the ability to
override. It will include channels for improving ops by exposing status through API conditions and Kubernetes status
sub-resources.
Making an application accessible needs to be a trivial task for any developer. Similarly, infrastructure administrators
will enjoy a simplified management model that doesn’t require extensive knowledge of the solution’s architecture to
operate.
All environments
Envoy Gateway will support running natively in Kubernetes environments as well as non-Kubernetes deployments.
Initially, Kubernetes will receive the most focus, with the aim of having Envoy Gateway become the de facto
standard for Kubernetes ingress supporting the Gateway API.
Additional goals include multi-cluster support and various runtime environments.
Extensibility
Vendors will have the ability to provide value-added products built on the Envoy Gateway foundation.
It will remain easy for end-users to leverage common Envoy Proxy extension points such as providing an implementation
for authentication methods and rate-limiting. For advanced use cases, users will have the ability to use the full power
of xDS.
Since a general-purpose API cannot address all use cases, Envoy Gateway will provide additional extension points
for flexibility. As such, Envoy Gateway will form the base of vendor-provided managed control plane solutions,
allowing vendors to shift to a higher management plane layer.
Non-objectives
Cannibalize vendor models
Vendors need to have the ability to drive commercial value, so the goal is not to cannibalize any existing vendor
monetization model, though some vendors may be affected by it.
Disrupt current Envoy usage patterns
Envoy Gateway is purely an additive convenience layer and is not meant to disrupt any usage pattern of any user
with Envoy Proxy, xDS, or go-control-plane.
Personas
In order of priority
1. Application developer
The application developer spends the majority of their time developing business logic code. They require the ability to
manage access to their application.
2. Infrastructure administrators
The infrastructure administrators are responsible for the installation, maintenance, and operation of
API gateway appliances in infrastructure, such as CRDs, roles, service accounts, certificates, etc.
Infrastructure administrators support the needs of application developers by managing instances of Envoy Gateway.
1.2 - System Design
Goals
- Define the system components needed to satisfy the requirements of Envoy Gateway.
Non-Goals
- Create a detailed design and interface specification for each system component.
Terminology
- Control Plane: A collection of inter-related software components for providing application gateway and routing
functionality. The control plane is implemented by Envoy Gateway and provides services for managing the data plane.
These services are detailed in the components section.
- Data Plane: Provides intelligent application-level traffic routing and is implemented as one or more Envoy proxies.
Architecture
Configuration
Envoy Gateway is configured statically at startup and the managed data plane is configured dynamically through
Kubernetes resources, primarily Gateway API objects.
Static Configuration
Static configuration is used to configure Envoy Gateway at startup, i.e. change the GatewayClass controllerName,
configure a Provider, etc. Currently, Envoy Gateway only supports configuration through a configuration file. If the
configuration file is not provided, Envoy Gateway starts up with default configuration parameters.
Dynamic Configuration
Dynamic configuration is based on the concept of declaring the desired state of the data plane and using
reconciliation loops to drive the actual state toward the desired state. The desired state of the data plane is
defined as Kubernetes resources that provide the following services:
- Infrastructure Management: Manage the data plane infrastructure, i.e. deploy, upgrade, etc. This configuration is
expressed through GatewayClass and Gateway resources. The EnvoyProxy Custom Resource can be referenced by
gatewayclass.spec.parametersRef to modify data plane infrastructure default parameters, e.g. expose Envoy network
endpoints using a ClusterIP service instead of a LoadBalancer service.
- Traffic Routing: Define how to handle application-level requests to backend services. For example, route all HTTP
requests for “www.example.com” to a backend service running a web server. This configuration is expressed through
HTTPRoute and TLSRoute resources that match, filter, and route traffic to a backend.
Although a backend can be any valid Kubernetes Group/Kind resource, Envoy Gateway only supports a Service
reference.
Components
Envoy Gateway is made up of several components that communicate in-process; how this communication happens is described
in the Watching Components Design.
Provider
A Provider is an infrastructure component that Envoy Gateway calls to establish its runtime configuration, resolve
services, persist data, etc. As of v0.2, Kubernetes is the only implemented provider. A file provider is on the roadmap
via Issue #37. Other providers can be added in the future as Envoy Gateway use cases are better understood. A
provider is configured at startup through Envoy Gateway’s static configuration.
Kubernetes Provider
- Uses Kubernetes-style controllers to reconcile Kubernetes resources that comprise the
dynamic configuration.
- Manages the data plane through Kubernetes API CRUD operations.
- Uses Kubernetes for Service discovery.
- Uses etcd (via Kubernetes API) to persist data.
File Provider
- Uses a file watcher to watch files in a directory that define the data plane configuration.
- Manages the data plane by calling internal APIs, e.g. CreateDataPlane().
- Uses the host’s DNS for Service discovery.
- If needed, the local filesystem is used to persist data.
Resource Watcher
The Resource Watcher watches resources used to establish and maintain Envoy Gateway’s dynamic configuration. The
mechanics for watching resources are provider-specific, e.g. informers, caches, etc. are used for the Kubernetes
provider. The Resource Watcher uses the configured provider for input and provides resources to the Resource Translator
as output.
Resource Translator
The Resource Translator translates external resources, e.g. GatewayClass, from the Resource Watcher to the Intermediate
Representation (IR). It is responsible for:
- Translating infrastructure-specific resources/fields from the Resource Watcher to the Infra IR.
- Translating proxy configuration resources/fields from the Resource Watcher to the xDS IR.
Note: The Resource Translator is implemented as the Translator API type in the gatewayapi package.
Intermediate Representation (IR)
The Intermediate Representation defines internal data models that external resources are translated into. This allows
Envoy Gateway to be decoupled from the external resources used for dynamic configuration. The IR consists of an Infra IR
used as input for the Infra Manager and an xDS IR used as input for the xDS Translator.
- Infra IR: Used as the internal definition of the managed data plane infrastructure.
- xDS IR: Used as the internal definition of the managed data plane xDS configuration.
xDS Translator
The xDS Translator translates the xDS IR into xDS Resources that are consumed by the xDS server.
xDS Server
The xDS Server is an xDS gRPC server based on Go Control Plane. Go Control Plane implements the Delta xDS Server
Protocol and is responsible for using xDS to configure the data plane.
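For orientation, here is a minimal, illustrative sketch of an xDS server built on Go Control Plane; it is not Envoy Gateway's actual wiring, and the listen address (port 18000, matching the xds_cluster examples later in these docs) is an assumption.

package main

import (
    "context"
    "net"

    clusterservice "github.com/envoyproxy/go-control-plane/envoy/service/cluster/v3"
    discoveryservice "github.com/envoyproxy/go-control-plane/envoy/service/discovery/v3"
    listenerservice "github.com/envoyproxy/go-control-plane/envoy/service/listener/v3"
    routeservice "github.com/envoyproxy/go-control-plane/envoy/service/route/v3"
    cachev3 "github.com/envoyproxy/go-control-plane/pkg/cache/v3"
    serverv3 "github.com/envoyproxy/go-control-plane/pkg/server/v3"
    "google.golang.org/grpc"
)

func main() {
    // The snapshot cache holds per-node sets of xDS resources; the output of
    // the xDS Translator would be written into it as new snapshots.
    snapshotCache := cachev3.NewSnapshotCache(false, cachev3.IDHash{}, nil)

    // The server streams (Delta) xDS out of the cache to connected proxies.
    srv := serverv3.NewServer(context.Background(), snapshotCache, nil)

    grpcServer := grpc.NewServer()
    discoveryservice.RegisterAggregatedDiscoveryServiceServer(grpcServer, srv)
    clusterservice.RegisterClusterDiscoveryServiceServer(grpcServer, srv)
    listenerservice.RegisterListenerDiscoveryServiceServer(grpcServer, srv)
    routeservice.RegisterRouteDiscoveryServiceServer(grpcServer, srv)

    lis, err := net.Listen("tcp", ":18000")
    if err != nil {
        panic(err)
    }
    _ = grpcServer.Serve(lis)
}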
Infra Manager
The Infra Manager is a provider-specific component responsible for managing the following infrastructure:
- Data Plane - Manages all the infrastructure required to run the managed Envoy proxies. For example, CRUD Deployment,
Service, etc. resources to run Envoy in a Kubernetes cluster.
- Auxiliary Control Planes - Optional infrastructure needed to implement application Gateway features that require
external integrations with the managed Envoy proxies. For example, Global Rate Limiting requires provisioning
and configuring the Envoy Rate Limit Service and the Rate Limit filter. Such features are exposed to
users through the Custom Route Filters extension.
The Infra Manager consumes the Infra IR as input to manage the data plane infrastructure.
Design Decisions
- Envoy Gateway consumes one GatewayClass by comparing its configured controller name with spec.controllerName of a
GatewayClass. If multiple GatewayClasses exist with the same spec.controllerName, Envoy Gateway follows Gateway API
guidelines to resolve the conflict. gatewayclass.spec.parametersRef refers to the EnvoyProxy custom resource for
configuring the managed proxy infrastructure. If unspecified, default configuration parameters are used for the
managed proxy infrastructure.
- Envoy Gateway manages Gateways that reference its GatewayClass.
- A Gateway resource causes Envoy Gateway to provision managed Envoy proxy infrastructure.
- Envoy Gateway groups Listeners by Port and collapses each group of Listeners into a single Listener if the Listeners
in the group are compatible. Envoy Gateway considers Listeners to be compatible if all the following conditions are
met:
- Either each Listener within the group specifies the “HTTP” Protocol or each Listener within the group specifies
either the “HTTPS” or “TLS” Protocol.
- Each Listener within the group specifies a unique “Hostname”.
- As a special case, one Listener within a group may omit “Hostname”, in which case this Listener matches when no
other Listener matches.
- Envoy Gateway does not merge listeners across multiple Gateways.
- Envoy Gateway follows Gateway API guidelines to resolve any conflicts.
- A Gateway listener corresponds to an Envoy proxy Listener.
- An HTTPRoute resource corresponds to an Envoy proxy Route.
- The goal is to make Envoy Gateway components extensible in the future. See the roadmap for additional details.
The draft for this document is here.
1.3 - Watching Components Design
Envoy Gateway is made up of several components that communicate in-process. Some of them (namely Providers) watch
external resources, and “publish” what they see for other components to consume; others watch what another publishes and
act on it (for example, the resource translator watches what the providers publish, and then publishes its own results that
are watched by another component). Some of these internally published results are consumed by multiple components.
To facilitate this communication, we use the watchable library. The watchable.Map type is very similar to the
standard library’s sync.Map type, but supports a .Subscribe (and .SubscribeSubset) method that promotes a pub/sub
pattern.
Pub
Many of the things we communicate around are naturally named, either by a bare “name” string or by a “name”/“namespace”
tuple. And because watchable.Map is typed, it makes sense to have one map for each type of thing (very similar to if
we were using native Go maps). For example, a struct that might be written to by the Kubernetes provider, and read by
the IR translator:
type ResourceTable struct {
// gateway classes are cluster-scoped; no namespace
GatewayClasses watchable.Map[string, *gwapiv1.GatewayClass]
// gateways are namespace-scoped, so use a k8s.io/apimachinery/pkg/types.NamespacedName as the map key.
Gateways watchable.Map[types.NamespacedName, *gwapiv1.Gateway]
HTTPRoutes watchable.Map[types.NamespacedName, *gwapiv1.HTTPRoute]
}
The Kubernetes provider updates the table by calling table.Thing.Store(name, val) and table.Thing.Delete(name);
updating a map key with a value that is deep-equal (usually reflect.DeepEqual, but you can implement your own .Equal
method) to the current value is a no-op; it won’t trigger an event for subscribers. This is handy so that the publisher
doesn’t have as much state to keep track of; it doesn’t need to know “did I already publish this thing”, it can just
.Store its data and watchable will do the right thing.
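For example, a hypothetical reconcile hook in the Kubernetes provider might publish GatewayClasses like this (the function name and the deleted flag are illustrative; imports are assumed to match the ResourceTable example above):

// publishGatewayClass pushes the observed GatewayClass into the shared table.
// Because storing a deep-equal value is a no-op, it can be called on every
// reconcile without generating spurious events for subscribers.
func publishGatewayClass(table *ResourceTable, gc *gwapiv1.GatewayClass, deleted bool) {
    if deleted {
        table.GatewayClasses.Delete(gc.Name)
        return
    }
    table.GatewayClasses.Store(gc.Name, gc.DeepCopy())
}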
Sub
Meanwhile, the translator and other interested components subscribe to it with table.Thing.Subscribe (or
table.Thing.SubscribeSubset if they only care about a few “Thing”s). So the translator goroutine might look like:
func(ctx context.Context) error {
    for snapshot := range k8sTable.HTTPRoutes.Subscribe(ctx) {
        fullState := irInput{
            GatewayClasses: k8sTable.GatewayClasses.LoadAll(),
            Gateways:       k8sTable.Gateways.LoadAll(),
            HTTPRoutes:     snapshot.State,
        }
        translate(fullState)
    }
    return nil
}
Or, to watch multiple maps in the same loop:
func worker(ctx context.Context) error {
    classCh := k8sTable.GatewayClasses.Subscribe(ctx)
    gwCh := k8sTable.Gateways.Subscribe(ctx)
    routeCh := k8sTable.HTTPRoutes.Subscribe(ctx)
    for ctx.Err() == nil {
        var arg irInput
        select {
        case snapshot := <-classCh:
            arg.GatewayClasses = snapshot.State
        case snapshot := <-gwCh:
            arg.Gateways = snapshot.State
        case snapshot := <-routeCh:
            arg.HTTPRoutes = snapshot.State
        }
        if arg.GatewayClasses == nil {
            arg.GatewayClasses = k8sTable.GatewayClasses.LoadAll()
        }
        if arg.Gateways == nil {
            arg.Gateways = k8sTable.Gateways.LoadAll()
        }
        if arg.HTTPRoutes == nil {
            arg.HTTPRoutes = k8sTable.HTTPRoutes.LoadAll()
        }
        translate(arg)
    }
    return ctx.Err()
}
From the updates it gets from .Subscribe, it can get a full view of the map being subscribed to via snapshot.State;
but it must read the other maps explicitly. Like sync.Map, watchable.Maps are thread-safe; while .Subscribe is a
handy way to know when to run, .Load and friends can be used without subscribing.
There can be any number of subscribers. For that matter, there can be any number of publishers .Storeing things, but
it’s probably wise to just have one publisher for each map.
The channel returned from .Subscribe is immediately readable with a snapshot of the map as it existed when
.Subscribe was called, and becomes readable again whenever .Store or .Delete mutates the map. If multiple
mutations happen between reads (or if mutations happen between .Subscribe and the first read), they are coalesced
into one snapshot to be read; the snapshot.State is the most-recent full state, and snapshot.Updates is a listing of
each of the mutations that cause this snapshot to be different than the last-read one. This way subscribers don’t need
to worry about a backlog accumulating if they can’t keep up with the rate of changes from the publisher.
If the map contains anything before .Subscribe is called, that very first read won’t include snapshot.Updates
entries for those pre-existing items; if you are working with snapshot.Updates instead of snapshot.State, then you
must add special handling for your first read. We have a utility function ./internal/message.HandleSubscription to
help with this.
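A rough sketch of that special handling is shown below; it assumes each entry in snapshot.Updates carries Key, Delete, and Value fields, which is an assumption about the watchable API rather than something defined in this document (imports of context, log, and the ResourceTable above are assumed).

func watchHTTPRoutes(ctx context.Context, table *ResourceTable) {
    first := true
    for snapshot := range table.HTTPRoutes.Subscribe(ctx) {
        if first {
            // The initial snapshot carries no Updates entries for items that
            // existed before Subscribe, so treat everything in State as new.
            for name, route := range snapshot.State {
                log.Printf("initial HTTPRoute %s: %v", name, route.Spec.Hostnames)
            }
            first = false
            continue
        }
        for _, update := range snapshot.Updates {
            if update.Delete {
                log.Printf("HTTPRoute %s deleted", update.Key)
                continue
            }
            log.Printf("HTTPRoute %s stored: %v", update.Key, update.Value.Spec.Hostnames)
        }
    }
}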
Other Notes
The common pattern will likely be that the entrypoint that launches the goroutines for each component instantiates the
maps and passes them to the appropriate publishers and subscribers; same as if they were communicating via a dumb chan.
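A minimal sketch of that wiring, with hypothetical runKubernetesProvider and runTranslator functions standing in for the real components:

func run(ctx context.Context) {
    // The entrypoint owns the table and hands it to one publisher and any
    // number of subscribers, just as it would hand out a plain channel.
    table := new(ResourceTable)
    go runKubernetesProvider(ctx, table) // publisher: calls table.*.Store / Delete
    go runTranslator(ctx, table)         // subscriber: ranges over table.*.Subscribe(ctx)
    <-ctx.Done()
}

func runKubernetesProvider(ctx context.Context, table *ResourceTable) { /* see “Pub” above */ }

func runTranslator(ctx context.Context, table *ResourceTable) { /* see “Sub” above */ }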
A limitation of watchable.Map is that, in order to ensure safety between goroutines, it requires that value types be
deep-copiable: either by having a DeepCopy method, by being a proto.Message, or by containing no reference types so
that they can be deep-copied by naive assignment. Fortunately, we’re using controller-gen anyway, and controller-gen
can generate DeepCopy methods for us: just stick a // +k8s:deepcopy-gen=true on the types that you want it to generate
methods for.
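For example, a hypothetical IR value type published over a watchable.Map could be marked like this, so that controller-gen emits the DeepCopy method watchable needs:

// +k8s:deepcopy-gen=true

// ProxyListener is a hypothetical value type stored in a watchable.Map. It
// contains a slice (a reference type), so it cannot be safely copied by naive
// assignment; the marker above asks controller-gen to generate DeepCopy for it.
type ProxyListener struct {
    Name      string
    Port      int32
    Hostnames []string
}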
1.4 - Gateway API Translator Design
The Gateway API translator translates external resources, e.g. GatewayClass, from the configured Provider to the
Intermediate Representation (IR).
Assumptions
Initially target core conformance features only, to be followed by extended conformance features.
The main inputs to the Gateway API translator are:
- GatewayClass, Gateway, HTTPRoute, TLSRoute, Service, ReferenceGrant, Namespace, and Secret resources.
Note: ReferenceGrant is not fully implemented as of v0.2.
The outputs of the Gateway API translator are:
- xDS and Infra Internal Representations (IRs).
- Status updates for GatewayClass, Gateways, and HTTPRoutes.
Listener Compatibility
Envoy Gateway follows Gateway API listener compatibility spec:
Each listener in a Gateway must have a unique combination of Hostname, Port, and Protocol. An implementation MAY group
Listeners by Port and then collapse each group of Listeners into a single Listener if the implementation determines
that the Listeners in the group are “compatible”.
Note: Envoy Gateway does not collapse listeners across multiple Gateways.
Listener Compatibility Examples
Example 1: Gateway with compatible Listeners (same port & protocol, different hostnames)
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: gateway-1
namespace: envoy-gateway
spec:
gatewayClassName: envoy-gateway
listeners:
- name: http
protocol: HTTP
port: 80
allowedRoutes:
namespaces:
from: All
hostname: "*.envoygateway.io"
- name: http
protocol: HTTP
port: 80
allowedRoutes:
namespaces:
from: All
hostname: whales.envoygateway.io
Example 2: Gateway with compatible Listeners (same port & protocol, one hostname specified, one not)
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: gateway-1
namespace: envoy-gateway
spec:
gatewayClassName: envoy-gateway
listeners:
- name: http
protocol: HTTP
port: 80
allowedRoutes:
namespaces:
from: All
hostname: "*.envoygateway.io"
- name: http
protocol: HTTP
port: 80
allowedRoutes:
namespaces:
from: All
Example 3: Gateway with incompatible Listeners (same port, protocol and hostname)
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: gateway-1
namespace: envoy-gateway
spec:
gatewayClassName: envoy-gateway
listeners:
- name: http
protocol: HTTP
port: 80
allowedRoutes:
namespaces:
from: All
hostname: whales.envoygateway.io
- name: http
protocol: HTTP
port: 80
allowedRoutes:
namespaces:
from: All
hostname: whales.envoygateway.io
Example 4: Gateway with incompatible Listeners (neither specify a hostname)
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: gateway-1
namespace: envoy-gateway
spec:
gatewayClassName: envoy-gateway
listeners:
- name: http
protocol: HTTP
port: 80
allowedRoutes:
namespaces:
from: All
- name: http
protocol: HTTP
port: 80
allowedRoutes:
namespaces:
from: All
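The compatibility rules illustrated by these examples can be sketched in code. The following is a rough, illustrative check over listeners that have already been grouped by port; it is not Envoy Gateway's actual implementation.

import gwapiv1 "sigs.k8s.io/gateway-api/apis/v1"

// compatible reports whether a group of same-port listeners can be collapsed
// into a single listener: all HTTP, or all HTTPS/TLS, with unique hostnames
// and at most one listener omitting its hostname.
func compatible(group []gwapiv1.Listener) bool {
    seen := map[string]bool{}
    sawUnset := false
    allHTTP, allTLS := true, true
    for _, l := range group {
        switch l.Protocol {
        case gwapiv1.HTTPProtocolType:
            allTLS = false
        case gwapiv1.HTTPSProtocolType, gwapiv1.TLSProtocolType:
            allHTTP = false
        default:
            return false
        }
        if l.Hostname == nil {
            if sawUnset {
                return false // only one listener may omit Hostname, as in Example 2
            }
            sawUnset = true
            continue
        }
        if seen[string(*l.Hostname)] {
            return false // duplicate hostname, as in Example 3
        }
        seen[string(*l.Hostname)] = true
    }
    return allHTTP || allTLS
}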
Computing Status
Gateway API specifies a rich set of status fields & conditions for each resource. To achieve conformance, Envoy Gateway
must compute the appropriate status fields and conditions for managed resources.
Status is computed and set for:
- The managed GatewayClass (gatewayclass.status.conditions).
- Each managed Gateway, based on its Listeners’ status (gateway.status.conditions). For the Kubernetes provider, the
Envoy Deployment and Service status are also included to calculate Gateway status.
- Listeners for each Gateway (gateway.status.listeners).
- The ParentRef for each Route (route.status.parents).
The Gateway API translator is responsible for calculating status conditions while translating Gateway API resources to
the IR and publishing status over the message bus. The Status Manager subscribes to these status messages and
updates the resource status using the configured provider. For example, the Status Manager uses a Kubernetes client to
update resource status on the Kubernetes API server.
Outline
The following roughly outlines the translation process. Each step may produce (1) IR; and (2) status updates on Gateway
API resources.
Process Gateway Listeners
- Validate unique hostnames, ports, and protocols.
- Validate and compute supported kinds.
- Validate allowed namespaces (validate selector if specified).
- Validate TLS fields if specified, including resolving referenced Secrets.
Process HTTPRoutes
- foreach route rule:
- compute matches
- [core] path exact, path prefix
- [core] header exact
- [extended] query param exact
- [extended] HTTP method
- compute filters
- [core] request header modifier (set/add/remove)
- [core] request redirect (hostname, statuscode)
- [extended] request mirror
- compute backends
- [core] Kubernetes services
- foreach route parent ref:
- get matching listeners (check Gateway, section name, listener validation status, listener allowed routes, hostname intersection)
- foreach matching listener:
- foreach hostname intersection with route:
- add each computed route rule to host
Context Structs
To help store, access, and manipulate information as it is processed during translation, a set of context structs is
used. These structs wrap a given Gateway API type and add additional fields and methods to support processing.
GatewayContext wraps a Gateway and provides helper methods for setting conditions, accessing Listeners, etc.
type GatewayContext struct {
// The managed Gateway
*v1beta1.Gateway
// A list of Gateway ListenerContexts.
listeners []*ListenerContext
}
ListenerContext wraps a Listener and provides helper methods for setting conditions and other status information on
the associated Gateway.
type ListenerContext struct {
// The Gateway listener.
*v1beta1.Listener
// The Gateway this Listener belongs to.
gateway *v1beta1.Gateway
// An index used for managing this listener in the list of Gateway listeners.
listenerStatusIdx int
// Only Routes in namespaces selected by the selector may be attached
// to the Gateway this listener belongs to.
namespaceSelector labels.Selector
// The TLS Secret for this Listener, if applicable.
tlsSecret *v1.Secret
}
RouteContext represents a generic Route object (HTTPRoute, TLSRoute, etc.) that can reference Gateway objects.
type RouteContext interface {
client.Object
// GetRouteType returns the Kind of the Route object, HTTPRoute,
// TLSRoute, TCPRoute, UDPRoute etc.
GetRouteType() string
// GetHostnames returns the hosts targeted by the Route object.
GetHostnames() []string
// GetParentReferences returns the ParentReference of the Route object.
GetParentReferences() []v1beta1.ParentReference
// GetRouteParentContext returns RouteParentContext by using the Route
// objects' ParentReference.
GetRouteParentContext(forParentRef v1beta1.ParentReference) *RouteParentContext
}
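As a concrete illustration, an HTTPRoute-backed implementation of this interface might look roughly like the sketch below; the RouteParentContext fields are elided, and the caching of parent contexts is an assumption, not the actual implementation.

import v1beta1 "sigs.k8s.io/gateway-api/apis/v1beta1"

// RouteParentContext is assumed to hold per-ParentReference status; its real
// fields are not shown in this document.
type RouteParentContext struct{}

// HTTPRouteContext wraps an HTTPRoute; embedding the HTTPRoute satisfies
// client.Object.
type HTTPRouteContext struct {
    *v1beta1.HTTPRoute
    parentRefs map[v1beta1.ParentReference]*RouteParentContext
}

func (r *HTTPRouteContext) GetRouteType() string { return "HTTPRoute" }

func (r *HTTPRouteContext) GetHostnames() []string {
    hostnames := make([]string, len(r.Spec.Hostnames))
    for i, h := range r.Spec.Hostnames {
        hostnames[i] = string(h)
    }
    return hostnames
}

func (r *HTTPRouteContext) GetParentReferences() []v1beta1.ParentReference {
    return r.Spec.ParentRefs
}

func (r *HTTPRouteContext) GetRouteParentContext(forParentRef v1beta1.ParentReference) *RouteParentContext {
    if r.parentRefs == nil {
        r.parentRefs = make(map[v1beta1.ParentReference]*RouteParentContext)
    }
    if pc, ok := r.parentRefs[forParentRef]; ok {
        return pc
    }
    pc := &RouteParentContext{}
    r.parentRefs[forParentRef] = pc
    return pc
}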
1.5 - Control Plane Observability: Metrics
This document aims to cover all aspects of Envoy Gateway control plane metrics observability.
Note
Data plane observability (while important) is outside the scope of this document. For data plane observability, refer to
here.
Current State
At present, the Envoy Gateway control plane provides logs and controller-runtime metrics, without traces. Logs are
managed through our proprietary library (internal/logging, a shim to zap) and are written to /dev/stdout.
Goals
Our objectives include:
- Supporting PULL mode for Prometheus metrics and exposing these metrics on the admin address.
- Supporting PUSH mode metrics, sending metrics to the OpenTelemetry stats sink via gRPC or HTTP.
Non-Goals
Our non-goals include:
- Supporting other stats sinks.
Use-Cases
The use-cases include:
- Exposing Prometheus metrics in the Envoy Gateway Control Plane.
- Pushing Envoy Gateway Control Plane metrics via the Open Telemetry Sink.
Design
Standards
Our metrics will be built upon the OpenTelemetry standards. All metrics will be configured via the OpenTelemetry SDK, which offers neutral libraries that can be connected to various backends.
This approach allows the Envoy Gateway code to concentrate on the crucial aspect - generating the metrics - and delegate all other tasks to systems designed for telemetry ingestion.
Attributes
OpenTelemetry defines a set of Semantic Conventions, including Kubernetes specific ones.
These attributes can be expressed in logs (as keys of structured logs), traces (as attributes), and metrics (as labels).
We aim to use attributes consistently where applicable. Where possible, these should adhere to codified Semantic Conventions; when not possible, they should maintain consistency across the project.
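To make this concrete, here is a small sketch of recording a counter with the OpenTelemetry Go SDK and attaching such attributes; the metric and attribute names are purely illustrative, not the names Envoy Gateway exposes.

import (
    "context"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/metric"
)

func recordReconcile(ctx context.Context, namespace, kind string) error {
    // The meter comes from whatever MeterProvider was registered globally
    // (a Prometheus reader for PULL mode, an OTLP exporter for PUSH mode).
    meter := otel.Meter("envoy-gateway")

    counter, err := meter.Int64Counter(
        "resources_reconciled_total",
        metric.WithDescription("Number of watched resources reconciled."),
    )
    if err != nil {
        return err
    }

    // Attributes become Prometheus labels or OTLP attributes, and should
    // follow the Semantic Conventions where one exists (k8s.namespace.name).
    counter.Add(ctx, 1,
        metric.WithAttributes(
            attribute.String("k8s.namespace.name", namespace),
            attribute.String("resource.kind", kind),
        ),
    )
    return nil
}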
Extensibility
Envoy Gateway supports both PULL and PUSH mode metrics, with metrics exported via Prometheus by default.
Additionally, Envoy Gateway can export metrics using both the OTEL gRPC metrics exporter and the OTEL HTTP metrics exporter, which push metrics over gRPC/HTTP to a remote OTEL collector.
Users can extend these in two ways:
Downstream Collection
Based on the exported data, other tools can collect, process, and export telemetry as needed. Some examples include:
- Metrics in PULL mode: The OTEL collector can scrape Prometheus and export to X.
- Metrics in PUSH mode: The OTEL collector can receive OTEL gRPC/HTTP exporter metrics and export to X.
While the examples above involve OTEL collectors, there are numerous other systems available.
Vendor extensions
The OTEL libraries allow for the registration of Providers/Handlers. While we will offer the default ones (PULL via Prometheus, PUSH via OTEL HTTP metrics exporter) mentioned in Envoy Gateway’s extensibility, we can easily allow custom builds of Envoy Gateway to plug in alternatives if the default options don’t meet their needs.
For instance, users may prefer to write metrics over the OTLP gRPC metrics exporter instead of the HTTP metrics exporter. This is perfectly acceptable, and almost impossible to prevent. OTEL has ways to register providers/exporters, and Envoy Gateway can ensure its usage is such that it’s not overly difficult to swap out a different provider/exporter.
Stability
Observability is, in essence, a user-facing API. Its primary purpose is to be consumed - by both humans and tooling. Therefore, having well-defined guarantees around their formats is crucial.
Please note that this refers only to the contents of the telemetry - what we emit, the names of things, semantics, etc. Other settings like Prometheus vs OTLP, JSON vs plaintext, logging levels, etc., are not considered.
I propose the following:
Metrics
Metrics offer the greatest potential for providing guarantees. They often directly influence alerts and dashboards, making changes highly impactful. This contrasts with traces and logs, which are often used for ad-hoc analysis, where minor changes to information can be easily understood by a human.
Moreover, there is precedent for this: Kubernetes Metrics Lifecycle has well-defined processes, and Envoy Gateway’s dataplane (Envoy Proxy) metrics are de facto stable.
Currently, all Envoy Gateway metrics lack defined stability. I suggest we categorize all existing metrics as one of:
- Deprecated: a metric that is intended to be phased out.
- Experimental: a metric that is off by default.
- Alpha: a metric that is on by default.
We should aim to promote a core set of metrics to Stable within a few releases.
Envoy Gateway API Types
New APIs will be added to Envoy Gateway config, which are used to manage Control Plane Telemetry bootstrap configs.
EnvoyGatewayTelemetry
// EnvoyGatewayTelemetry defines telemetry configurations for envoy gateway control plane.
// Control plane will focus on metrics observability telemetry and tracing telemetry later.
type EnvoyGatewayTelemetry struct {
// Metrics defines metrics configuration for envoy gateway.
Metrics *EnvoyGatewayMetrics `json:"metrics,omitempty"`
}
EnvoyGatewayMetrics
Prometheus metrics will be exposed on 0.0.0.0:19001; this address cannot be configured yet.
// EnvoyGatewayMetrics defines control plane push/pull metrics configurations.
type EnvoyGatewayMetrics struct {
// Sinks defines the metric sinks where metrics are sent to.
Sinks []EnvoyGatewayMetricSink `json:"sinks,omitempty"`
// Prometheus defines the configuration for prometheus endpoint.
Prometheus *EnvoyGatewayPrometheusProvider `json:"prometheus,omitempty"`
}
// EnvoyGatewayMetricSink defines control plane
// metric sinks where metrics are sent to.
type EnvoyGatewayMetricSink struct {
// Type defines the metric sink type.
// EG control plane currently supports OpenTelemetry.
// +kubebuilder:validation:Enum=OpenTelemetry
// +kubebuilder:default=OpenTelemetry
Type MetricSinkType `json:"type"`
// OpenTelemetry defines the configuration for OpenTelemetry sink.
// It's required if the sink type is OpenTelemetry.
OpenTelemetry *EnvoyGatewayOpenTelemetrySink `json:"openTelemetry,omitempty"`
}
type EnvoyGatewayOpenTelemetrySink struct {
// Host defines the sink service hostname.
Host string `json:"host"`
// Protocol defines the sink service protocol.
// +kubebuilder:validation:Enum=grpc;http
Protocol string `json:"protocol"`
// Port defines the port the sink service is exposed on.
//
// +optional
// +kubebuilder:validation:Minimum=0
// +kubebuilder:default=4317
Port int32 `json:"port,omitempty"`
}
// EnvoyGatewayPrometheusProvider will expose prometheus endpoint in pull mode.
type EnvoyGatewayPrometheusProvider struct {
// Disable defines whether to disable the prometheus metrics in pull mode.
//
Disable bool `json:"disable,omitempty"`
}
Example
- The following is an example that disables Prometheus metrics.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
logging:
level: null
default: info
provider:
type: Kubernetes
telemetry:
metrics:
prometheus:
disable: true
- The following is an example that sends metrics via the OpenTelemetry sink to an OTEL gRPC collector.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
logging:
level: null
default: info
provider:
type: Kubernetes
telemetry:
metrics:
sinks:
- type: OpenTelemetry
openTelemetry:
host: otel-collector.monitoring.svc.cluster.local
port: 4317
protocol: grpc
- The following is an example that disables Prometheus metrics and sends metrics via the OpenTelemetry sink to an OTEL HTTP collector at the same time.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
logging:
level: null
default: info
provider:
type: Kubernetes
telemetry:
metrics:
prometheus:
disable: true
sinks:
- type: OpenTelemetry
openTelemetry:
host: otel-collector.monitoring.svc.cluster.local
port: 4318
protocol: http
1.6 - BackendTrafficPolicy
Overview
This design document introduces the BackendTrafficPolicy API, which allows users to configure how the Envoy Proxy
server communicates with upstream backend services/endpoints.
Goals
- Add an API definition to hold settings for configuring behavior of the connection between the backend services
and Envoy Proxy listener.
Non Goals
- Define the API configuration fields in this API.
Implementation
BackendTrafficPolicy is an implied hierarchy type API that can be used to extend the Gateway API.
It can target either a Gateway, or an xRoute (HTTPRoute/GRPCRoute/etc.). When targeting a Gateway,
it will apply the configured settings within the BackendTrafficPolicy to all children xRoute resources of that Gateway.
If a BackendTrafficPolicy targets an xRoute and a different BackendTrafficPolicy targets the Gateway that route belongs
to, then the configuration from the policy that is targeting the xRoute resource will win in a conflict.
Example
Here is an example highlighting how a user can configure this API.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
namespace: default
spec:
gatewayClassName: eg
listeners:
- name: http
protocol: HTTP
port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: ipv4-route
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.foo.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: ipv4-service
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: ipv6-route
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.bar.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: ipv6-service
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: default-ipv-policy
namespace: default
spec:
protocols:
enableIPv6: false
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: eg
namespace: default
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: ipv6-support-policy
namespace: default
spec:
protocols:
enableIPv6: true
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: ipv6-route
namespace: default
Features / API Fields
Here is a list of some features that can be included in this API. Note that this list is not exhaustive.
- Protocol configuration
- Circuit breaking
- Retries
- Keep alive probes
- Health checking
- Load balancing
- Rate limit
Design Decisions
- This API will only support a single targetRef and can bind to only a Gateway or xRoute (HTTPRoute/GRPCRoute/etc.)
resource.
- This API resource MUST be part of the same namespace as the resource it targets.
- There can only be ONE policy resource attached to a specific Listener (section) within a Gateway.
- If the policy targets a resource but cannot attach to it, this information should be reflected in the Policy Status
field using the Conflicted=True condition.
- If multiple policies target the same resource, the oldest resource (based on creation timestamp) will attach to the
Gateway Listeners, the others will not (see the sketch below).
- If Policy A has a targetRef that includes a sectionName, i.e. it targets a specific Listener within a Gateway, and
Policy B has a targetRef that targets the same entire Gateway, then:
- Policy A will be applied/attached to the specific Listener defined in the targetRef.SectionName.
- Policy B will be applied to the remaining Listeners within the Gateway. Policy B will have an additional status
condition Overridden=True.
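The “oldest policy wins” rule above can be sketched as follows; policyMeta stands in for any policy resource, and the name tie-break is an assumption for determinism rather than a documented rule.

import (
    "sort"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// policyMeta stands in for the object metadata of any policy resource
// (e.g. BackendTrafficPolicy); only metadata matters for conflict resolution.
type policyMeta = metav1.ObjectMeta

// oldestPolicy returns the policy that should attach when several target the
// same resource: the one with the earliest creation timestamp.
func oldestPolicy(policies []policyMeta) *policyMeta {
    if len(policies) == 0 {
        return nil
    }
    sort.Slice(policies, func(i, j int) bool {
        ti, tj := policies[i].CreationTimestamp, policies[j].CreationTimestamp
        if !ti.Equal(&tj) {
            return ti.Before(&tj)
        }
        return policies[i].Name < policies[j].Name // deterministic tie-break
    })
    return &policies[0]
}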
Alternatives
- The project can indefinitely wait for these configuration parameters to be part of the Gateway API.
1.7 - Bootstrap Design
Overview
Issue 31 specifies the need for allowing advanced users to specify their custom
Envoy Bootstrap configuration rather than using the default Bootstrap configuration
defined in Envoy Gateway. This allows advanced users to extend Envoy Gateway and
support their custom use cases such as setting up tracing and stats configuration
that is not supported by Envoy Gateway.
Goals
- Define an API field to allow a user to specify a custom Bootstrap
- Provide tooling to allow the user to generate the default Bootstrap configuration
as well as validate their custom Bootstrap.
Non Goals
- Allow user to configure only a section of the Bootstrap
API
Leverage the existing EnvoyProxy resource which can be attached to the GatewayClass using
the parametersRef field, and define a Bootstrap
field within the resource. If this field is set,
the value is used as the Bootstrap configuration for all managed Envoy Proxies created by Envoy Gateway.
// EnvoyProxySpec defines the desired state of EnvoyProxy.
type EnvoyProxySpec struct {
......
// Bootstrap defines the Envoy Bootstrap as a YAML string.
// Visit https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/bootstrap/v3/bootstrap.proto#envoy-v3-api-msg-config-bootstrap-v3-bootstrap
// to learn more about the syntax.
// If set, this is the Bootstrap configuration used for the managed Envoy Proxy fleet instead of the default Bootstrap configuration
// set by Envoy Gateway.
// Some fields within the Bootstrap that are required to communicate with the xDS Server (Envoy Gateway) and receive xDS resources
// from it are not configurable and will result in the `EnvoyProxy` resource being rejected.
// Backward compatibility across minor versions is not guaranteed.
// We strongly recommend using `egctl x translate` to generate an `EnvoyProxy` resource with the `Bootstrap` field set to the default
// Bootstrap configuration used. You can edit this configuration, and rerun `egctl x translate` to ensure there are no validation errors.
//
// +optional
Bootstrap *string `json:"bootstrap,omitempty"`
}
A CLI tool egctl x translate
will be provided to the user to help generate a working Bootstrap configuration.
Here is an example where a user inputs a GatewayClass
and the CLI generates the EnvoyProxy
resource with the Bootstrap
field populated.
cat <<EOF | egctl x translate --from gateway-api --to gateway-api -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
group: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
name: with-bootstrap-config
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: with-bootstrap-config
spec:
bootstrap: |
admin:
access_log:
- name: envoy.access_loggers.file
typed_config:
"@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: /dev/null
address:
socket_address:
address: 127.0.0.1
port_value: 19000
dynamic_resources:
cds_config:
resource_api_version: V3
api_config_source:
api_type: DELTA_GRPC
transport_api_version: V3
grpc_services:
- envoy_grpc:
cluster_name: xds_cluster
set_node_on_first_message_only: true
lds_config:
resource_api_version: V3
api_config_source:
api_type: DELTA_GRPC
transport_api_version: V3
grpc_services:
- envoy_grpc:
cluster_name: xds_cluster
set_node_on_first_message_only: true
static_resources:
clusters:
- connect_timeout: 1s
load_assignment:
cluster_name: xds_cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: envoy-gateway
port_value: 18000
typed_extension_protocol_options:
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions":
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions"
"explicit_http_config":
"http2_protocol_options": {}
name: xds_cluster
type: STRICT_DNS
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
common_tls_context:
tls_params:
tls_maximum_protocol_version: TLSv1_3
tls_certificate_sds_secret_configs:
- name: xds_certificate
sds_config:
path_config_source:
path: "/sds/xds-certificate.json"
resource_api_version: V3
validation_context_sds_secret_config:
name: xds_trusted_ca
sds_config:
path_config_source:
path: "/sds/xds-trusted-ca.json"
resource_api_version: V3
layered_runtime:
layers:
- name: runtime-0
rtds_layer:
rtds_config:
resource_api_version: V3
api_config_source:
transport_api_version: V3
api_type: DELTA_GRPC
grpc_services:
envoy_grpc:
cluster_name: xds_cluster
name: runtime-0
The user can now modify the output for their use case. Let’s say, for this example, the user wants to change the admin server port
from 19000 to 18000; they can do so by editing the previous output and running egctl x translate again to see if there are any
validation errors. Validation errors should be surfaced in the Status subresource. The internal validator will ensure that the
Bootstrap string can be unmarshalled into the Bootstrap object, as well as ensure that the user cannot override certain fields
within the Bootstrap configuration, such as the address and TLS context within the xds_cluster, which are essential for xDS
communication between Envoy Gateway and Envoy Proxy.
cat <<EOF | egctl x translate --from gateway-api --to gateway-api -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
group: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
name: with-bootstrap-config
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: with-bootstrap-config
spec:
bootstrap: |
admin:
access_log:
- name: envoy.access_loggers.file
typed_config:
"@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: /dev/null
address:
socket_address:
address: 127.0.0.1
port_value: 18000
dynamic_resources:
cds_config:
resource_api_version: V3
api_config_source:
api_type: DELTA_GRPC
transport_api_version: V3
grpc_services:
- envoy_grpc:
cluster_name: xds_cluster
set_node_on_first_message_only: true
lds_config:
resource_api_version: V3
api_config_source:
api_type: DELTA_GRPC
transport_api_version: V3
grpc_services:
- envoy_grpc:
cluster_name: xds_cluster
set_node_on_first_message_only: true
static_resources:
clusters:
- connect_timeout: 1s
load_assignment:
cluster_name: xds_cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: envoy-gateway
port_value: 18000
typed_extension_protocol_options:
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions":
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions"
"explicit_http_config":
"http2_protocol_options": {}
name: xds_cluster
type: STRICT_DNS
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
common_tls_context:
tls_params:
tls_maximum_protocol_version: TLSv1_3
tls_certificate_sds_secret_configs:
- name: xds_certificate
sds_config:
path_config_source:
path: "/sds/xds-certificate.json"
resource_api_version: V3
validation_context_sds_secret_config:
name: xds_trusted_ca
sds_config:
path_config_source:
path: "/sds/xds-trusted-ca.json"
resource_api_version: V3
layered_runtime:
layers:
- name: runtime-0
rtds_layer:
rtds_config:
resource_api_version: V3
api_config_source:
transport_api_version: V3
api_type: DELTA_GRPC
grpc_services:
envoy_grpc:
cluster_name: xds_cluster
name: runtime-0
EOF
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
group: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
name: with-bootstrap-config
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: with-bootstrap-config
spec:
bootstrap: |
admin:
access_log:
- name: envoy.access_loggers.file
typed_config:
"@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: /dev/null
address:
socket_address:
address: 127.0.0.1
port_value: 18000
dynamic_resources:
cds_config:
resource_api_version: V3
api_config_source:
api_type: DELTA_GRPC
transport_api_version: V3
grpc_services:
- envoy_grpc:
cluster_name: xds_cluster
set_node_on_first_message_only: true
lds_config:
resource_api_version: V3
api_config_source:
api_type: DELTA_GRPC
transport_api_version: V3
grpc_services:
- envoy_grpc:
cluster_name: xds_cluster
set_node_on_first_message_only: true
static_resources:
clusters:
- connect_timeout: 1s
load_assignment:
cluster_name: xds_cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: envoy-gateway
port_value: 18000
typed_extension_protocol_options:
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions":
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions"
"explicit_http_config":
"http2_protocol_options": {}
name: xds_cluster
type: STRICT_DNS
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
common_tls_context:
tls_params:
tls_maximum_protocol_version: TLSv1_3
tls_certificate_sds_secret_configs:
- name: xds_certificate
sds_config:
path_config_source:
path: "/sds/xds-certificate.json"
resource_api_version: V3
validation_context_sds_secret_config:
name: xds_trusted_ca
sds_config:
path_config_source:
path: "/sds/xds-trusted-ca.json"
resource_api_version: V3
layered_runtime:
layers:
- name: runtime-0
rtds_layer:
rtds_config:
resource_api_version: V3
api_config_source:
transport_api_version: V3
api_type: DELTA_GRPC
grpc_services:
envoy_grpc:
cluster_name: xds_cluster
name: runtime-0
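To give a feel for the validation described above, here is a rough, illustrative sketch of checking that a user-supplied Bootstrap still points the xds_cluster at Envoy Gateway; the expected host/port values mirror the examples above, and the real validator works on the unmarshalled Envoy Bootstrap proto and covers more fields.

import (
    "fmt"

    "sigs.k8s.io/yaml"
)

// Only the fields needed for this illustrative check are modelled here.
type socketAddress struct {
    Address   string `json:"address"`
    PortValue int    `json:"port_value"`
}

type bootstrapConfig struct {
    StaticResources struct {
        Clusters []struct {
            Name           string `json:"name"`
            LoadAssignment struct {
                Endpoints []struct {
                    LBEndpoints []struct {
                        Endpoint struct {
                            Address struct {
                                SocketAddress socketAddress `json:"socket_address"`
                            } `json:"address"`
                        } `json:"endpoint"`
                    } `json:"lb_endpoints"`
                } `json:"endpoints"`
            } `json:"load_assignment"`
        } `json:"clusters"`
    } `json:"static_resources"`
}

// validateBootstrap checks that the xds_cluster endpoint still targets the
// Envoy Gateway xDS server ("envoy-gateway":18000 in the examples above).
func validateBootstrap(userBootstrap string) error {
    var b bootstrapConfig
    if err := yaml.Unmarshal([]byte(userBootstrap), &b); err != nil {
        return fmt.Errorf("unable to unmarshal bootstrap: %w", err)
    }
    for _, c := range b.StaticResources.Clusters {
        if c.Name != "xds_cluster" {
            continue
        }
        for _, ep := range c.LoadAssignment.Endpoints {
            for _, lb := range ep.LBEndpoints {
                sa := lb.Endpoint.Address.SocketAddress
                if sa.Address != "envoy-gateway" || sa.PortValue != 18000 {
                    return fmt.Errorf("xds_cluster endpoint %s:%d must not be overridden", sa.Address, sa.PortValue)
                }
            }
        }
    }
    return nil
}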
1.8 - ClientTrafficPolicy
Overview
This design document introduces the ClientTrafficPolicy API, which allows system administrators to configure how the
Envoy Proxy server behaves with downstream clients.
Goals
- Add an API definition to hold settings for configuring behavior of the connection between the downstream
client and Envoy Proxy listener.
Non Goals
- Define the API configuration fields in this API.
Implementation
ClientTrafficPolicy is a Direct Policy Attachment type API that can be used to extend the Gateway API to define
configuration that affects the connection between the downstream client and Envoy Proxy listener.
Example
Here is an example highlighting how a user can configure this API.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
namespace: default
spec:
gatewayClassName: eg
listeners:
- name: http
protocol: HTTP
port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
name: enable-proxy-protocol-policy
namespace: default
spec:
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: eg
namespace: default
enableProxyProtocol: true
Features / API Fields
Here is a list of features that can be included in this API:
- Downstream ProxyProtocol
- Downstream Keep Alives
- IP Blocking
- Downstream HTTP3
Design Decisions
- This API will only support a single targetRef and can bind to only a Gateway resource.
- This API resource MUST be part of the same namespace as the Gateway resource.
- There can only be ONE policy resource attached to a specific Listener (section) within a Gateway.
- If the policy targets a resource but cannot attach to it, this information should be reflected in the Policy Status
field using the Conflicted=True condition.
- If multiple policies target the same resource, the oldest resource (based on creation timestamp) will attach to the
Gateway Listeners, the others will not.
- If Policy A has a targetRef that includes a sectionName, i.e. it targets a specific Listener within a Gateway, and
Policy B has a targetRef that targets the same entire Gateway, then:
- Policy A will be applied/attached to the specific Listener defined in the targetRef.SectionName.
- Policy B will be applied to the remaining Listeners within the Gateway. Policy B will have an additional status
condition Overridden=True.
Alternatives
- The project can indefinitely wait for these configuration parameters to be part of the Gateway API.
1.9 - Configuration API Design
Motivation
Issue 51 specifies the need to design an API for configuring Envoy Gateway. The control plane is configured
statically at startup and the data plane is configured dynamically through Kubernetes resources, primarily
Gateway API objects. Refer to the Envoy Gateway design doc for additional details regarding
Envoy Gateway terminology and configuration.
Goals
- Define an initial API to configure Envoy Gateway at startup.
- Define an initial API for configuring the managed data plane, e.g. Envoy proxies.
Non-Goals
- Implementation of the configuration APIs.
- Define the status subresource of the configuration APIs.
- Define a complete set of APIs for configuring Envoy Gateway. As stated in the Goals, this document defines the
initial configuration APIs.
- Define an API for deploying/provisioning/operating Envoy Gateway. If needed, a future Envoy Gateway operator would be
responsible for designing and implementing this type of API.
- Specify tooling for managing the API, e.g. generate protos, CRDs, controller RBAC, etc.
Control Plane API
The EnvoyGateway API defines the control plane configuration, e.g. Envoy Gateway. Key points of this API are:
- It will define Envoy Gateway’s startup configuration file. If the file does not exist, Envoy Gateway will start up
with default configuration parameters.
- EnvoyGateway inlines the TypeMeta API. This allows EnvoyGateway to be versioned and managed as a GroupVersionKind
scheme.
- EnvoyGateway does not contain a metadata field since it’s currently represented as a static configuration file
instead of a Kubernetes resource.
- Since EnvoyGateway does not surface status, EnvoyGatewaySpec is inlined.
- If data plane static configuration is required in the future, Envoy Gateway will use a separate file for this purpose.
The v1alpha1 version and gateway.envoyproxy.io API group get generated:
// gateway/api/config/v1alpha1/doc.go
// Package v1alpha1 contains API Schema definitions for the gateway.envoyproxy.io API group.
//
// +groupName=gateway.envoyproxy.io
package v1alpha1
The initial EnvoyGateway API:
// gateway/api/config/v1alpha1/envoygateway.go
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// EnvoyGateway is the Schema for the envoygateways API
type EnvoyGateway struct {
metav1.TypeMeta `json:",inline"`
// EnvoyGatewaySpec defines the desired state of Envoy Gateway.
EnvoyGatewaySpec `json:",inline"`
}
// EnvoyGatewaySpec defines the desired state of Envoy Gateway configuration.
type EnvoyGatewaySpec struct {
// Gateway defines Gateway-API specific configuration. If unset, default
// configuration parameters will apply.
//
// +optional
Gateway *Gateway `json:"gateway,omitempty"`
// Provider defines the desired provider configuration. If unspecified,
// the Kubernetes provider is used with default parameters.
//
// +optional
Provider *EnvoyGatewayProvider `json:"provider,omitempty"`
}
// Gateway defines desired Gateway API configuration of Envoy Gateway.
type Gateway struct {
// ControllerName defines the name of the Gateway API controller. If unspecified,
// defaults to "gateway.envoyproxy.io/gatewayclass-controller". See the following
// for additional details:
//
// https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.GatewayClass
//
// +optional
ControllerName string `json:"controllerName,omitempty"`
}
// EnvoyGatewayProvider defines the desired configuration of a provider.
// +union
type EnvoyGatewayProvider struct {
// Type is the type of provider to use. If unset, the Kubernetes provider is used.
//
// +unionDiscriminator
Type ProviderType `json:"type,omitempty"`
// Kubernetes defines the configuration of the Kubernetes provider. Kubernetes
// provides runtime configuration via the Kubernetes API.
//
// +optional
Kubernetes *EnvoyGatewayKubernetesProvider `json:"kubernetes,omitempty"`
// File defines the configuration of the File provider. File provides runtime
// configuration defined by one or more files.
//
// +optional
File *EnvoyGatewayFileProvider `json:"file,omitempty"`
}
// ProviderType defines the types of providers supported by Envoy Gateway.
type ProviderType string
const (
// KubernetesProviderType defines the "Kubernetes" provider.
KubernetesProviderType ProviderType = "Kubernetes"
// FileProviderType defines the "File" provider.
FileProviderType ProviderType = "File"
)
// EnvoyGatewayKubernetesProvider defines configuration for the Kubernetes provider.
type EnvoyGatewayKubernetesProvider struct {
// TODO: Add config as use cases are better understood.
}
// EnvoyGatewayFileProvider defines configuration for the File provider.
type EnvoyGatewayFileProvider struct {
// TODO: Add config as use cases are better understood.
}
Note: Provider-specific configuration is defined in the {$PROVIDER_NAME}Provider API.
Gateway
Gateway defines desired configuration of Gateway API controllers that reconcile and translate Gateway API
resources into the Intermediate Representation (IR). Refer to the Envoy Gateway design doc for additional
details.
Provider
Provider defines the desired configuration of an Envoy Gateway provider. A provider is an infrastructure component that
Envoy Gateway calls to establish its runtime configuration. Provider is a union type. Therefore, Envoy Gateway can be
configured with only one provider, based on the type discriminator field. Refer to the Envoy Gateway design doc for
additional details.
Control Plane Configuration
The configuration file is defined by the EnvoyGateway API type. At startup, Envoy Gateway searches for the configuration
at “/etc/envoy-gateway/config.yaml”.
Start Envoy Gateway.
Since the configuration file does not exist, Envoy Gateway will start with default configuration parameters.
The Kubernetes provider can be configured explicitly using provider.kubernetes:
$ cat << EOF > /etc/envoy-gateway/config.yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
provider:
type: Kubernetes
kubernetes: {}
EOF
This configuration will cause Envoy Gateway to use the Kubernetes provider with default configuration parameters.
The Kubernetes provider can be configured using the provider field. For example, the foo field can be set to “bar”:
$ cat << EOF > /etc/envoy-gateway/config.yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
provider:
type: Kubernetes
kubernetes:
foo: bar
EOF
Note: The Provider API from the Kubernetes package is currently undefined and foo: bar is provided for illustration
purposes only.
The same API structure is followed for each supported provider. The following example causes Envoy Gateway to use the
File provider:
$ cat << EOF > /etc/envoy-gateway/config.yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
provider:
type: File
file:
foo: bar
EOF
Note: The Provider API from the File package is currently undefined and foo: bar is provided for illustration
purposes only.
Gateway API-related configuration is expressed through the gateway field. If unspecified, Envoy Gateway will use
default configuration parameters for gateway. The following example causes the GatewayClass controller to manage
GatewayClasses with controllerName foo instead of the default gateway.envoyproxy.io/gatewayclass-controller:
$ cat << EOF > /etc/envoy-gateway/config.yaml
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
controllerName: foo
EOF
With any of the above configuration examples, Envoy Gateway can be started without any additional arguments.
Data Plane API
The data plane is configured dynamically through Kubernetes resources, primarily Gateway API objects.
Optionally, the data plane infrastructure can be configured by referencing a custom resource (CR) through
spec.parametersRef
of the managed GatewayClass. The EnvoyProxy
API defines the data plane infrastructure
configuration and is represented as the CR referenced by the managed GatewayClass. Key points of this API are:
- If unreferenced by gatewayclass.spec.parametersRef, default parameters will be used to configure the data plane infrastructure, e.g. expose Envoy network endpoints using a LoadBalancer service.
- Envoy Gateway will follow Gateway API recommendations regarding updates to the EnvoyProxy CR:
It is recommended that this resource be used as a template for Gateways. This means that a Gateway is based on the
state of the GatewayClass at the time it was created and changes to the GatewayClass or associated parameters are
not propagated down to existing Gateways.
The initial EnvoyProxy
API:
// gateway/api/config/v1alpha1/envoyproxy.go
package v1alpha1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
// EnvoyProxy is the Schema for the envoyproxies API.
type EnvoyProxy struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec EnvoyProxySpec `json:"spec,omitempty"`
Status EnvoyProxyStatus `json:"status,omitempty"`
}
// EnvoyProxySpec defines the desired state of Envoy Proxy infrastructure
// configuration.
type EnvoyProxySpec struct {
// Undefined by this design spec.
}
// EnvoyProxyStatus defines the observed state of EnvoyProxy.
type EnvoyProxyStatus struct {
// Undefined by this design spec.
}
The EnvoyProxySpec and EnvoyProxyStatus fields will be defined in the future as proxy infrastructure configuration use
cases are better understood.
Data Plane Configuration
GatewayClass and Gateway resources define the data plane infrastructure. Note that all examples assume Envoy Gateway is
running with the Kubernetes provider.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: example-class
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: example-gateway
spec:
gatewayClassName: example-class
listeners:
- name: http
protocol: HTTP
port: 80
Since the GatewayClass does not define spec.parametersRef, the data plane is provisioned using default configuration parameters. The Envoy proxies will be configured with an HTTP listener and a Kubernetes LoadBalancer service listening on port 80.
The following example will configure the data plane to use a ClusterIP service instead of the default LoadBalancer
service:
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: example-class
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
name: example-config
group: gateway.envoyproxy.io
kind: EnvoyProxy
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: example-gateway
spec:
gatewayClassName: example-class
listeners:
- name: http
protocol: HTTP
port: 80
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: example-config
spec:
networkPublishing:
type: ClusterIPService
Note: The NetworkPublishing API is currently undefined and is provided here for illustration purposes only.
1.10 - Debug support in Envoy Gateway
Overview
Envoy Gateway exposes endpoints at localhost:19000/debug/pprof
to run Golang profiles to aid in live debugging.
The endpoints are equivalent to those found in the http/pprof package. /debug/pprof/
returns an HTML page listing the available profiles.
Goals
- Add an admin server to the Envoy Gateway control plane.
- Add pprof support to the Envoy Gateway control plane.
- Define an API to allow customizing the admin server configuration.
- Define an API to allow enabling the Envoy Gateway config dump in logs.
The following are the different types of profiles end-users can run:
PROFILE | FUNCTION |
---|---|
/debug/pprof/allocs | Returns a sampling of all past memory allocations. |
/debug/pprof/block | Returns stack traces of goroutines that led to blocking on synchronization primitives. |
/debug/pprof/cmdline | Returns the command line that was invoked by the current program. |
/debug/pprof/goroutine | Returns stack traces of all current goroutines. |
/debug/pprof/heap | Returns a sampling of memory allocations of live objects. |
/debug/pprof/mutex | Returns stack traces of goroutines holding contended mutexes. |
/debug/pprof/profile | Returns pprof-formatted cpu profile. You can specify the duration using the seconds GET parameter. The default duration is 30 seconds. |
/debug/pprof/symbol | Returns the program counters listed in the request. |
/debug/pprof/threadcreate | Returns stack traces that led to creation of new OS threads. |
/debug/pprof/trace | Returns the execution trace in binary form. You can specify the duration using the seconds GET parameter. The default duration is 1 second. |
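Because these endpoints mirror Go's net/http/pprof package, a minimal sketch of how such an admin endpoint can be exposed is shown below; it is illustrative only and is not the actual Envoy Gateway admin server.
// A minimal sketch of serving /debug/pprof endpoints on an admin address.
package main

import (
    "net/http"
    _ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
    // 127.0.0.1:19000 mirrors the admin address used in the examples; it is
    // illustrative only.
    if err := http.ListenAndServe("127.0.0.1:19000", nil); err != nil {
        panic(err)
    }
}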
Non Goals
API
- Add an admin field in the EnvoyGateway config.
- Add an address field under the admin field.
- Add port and host fields under the address field.
- Add an enableDumpConfig field under the admin field.
- Add an enablePprof field under the admin field.
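A hedged Go sketch of what these admin fields could look like is shown below; the field names mirror the YAML examples that follow and are illustrative, not the canonical API definition.
// EnvoyGatewayAdmin is an illustrative sketch of the admin configuration.
type EnvoyGatewayAdmin struct {
    // Address defines the address the admin server listens on.
    Address *EnvoyGatewayAdminAddress `json:"address,omitempty"`
    // EnableDumpConfig enables dumping the Envoy Gateway config in logs.
    EnableDumpConfig bool `json:"enableDumpConfig,omitempty"`
    // EnablePprof enables the pprof endpoints on the admin server.
    EnablePprof bool `json:"enablePprof,omitempty"`
}

// EnvoyGatewayAdminAddress is an illustrative sketch of the admin server address.
type EnvoyGatewayAdminAddress struct {
    // Port defines the port the admin server listens on.
    Port int `json:"port,omitempty"`
    // Host defines the host the admin server listens on.
    Host string `json:"host,omitempty"`
}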
Here is an example configuration to enable the admin server and pprof:
apiVersion: gateway.envoyproxy.io/v1alpha1
gateway:
controllerName: "gateway.envoyproxy.io/gatewayclass-controller"
kind: EnvoyGateway
provider:
type: "Kubernetes"
admin:
enablePprof: true
address:
host: 127.0.0.1
port: 19000
Here is an example configuration to enable the Envoy Gateway config dump in logs:
apiVersion: gateway.envoyproxy.io/v1alpha1
gateway:
controllerName: "gateway.envoyproxy.io/gatewayclass-controller"
kind: EnvoyGateway
provider:
type: "Kubernetes"
admin:
enableDumpConfig: true
1.11 - egctl Design
Motivation
EG should provide a command line tool with the following capabilities:
- Collect configuration from Envoy Proxy and Envoy Gateway
- Analyze system configuration to diagnose any issues in Envoy Gateway
This tool is named egctl.
Syntax
Use the following syntax to run egctl
commands from your terminal window:
egctl [command] [entity] [name] [flags]
where command, entity, name, and flags are:
- command: Specifies the operation that you want to perform on one or more resources, for example config, version.
- entity: Specifies the entity the operation is being performed on, such as envoy-proxy or envoy-gateway.
- name: Specifies the name of the specified instance.
- flags: Specifies optional flags. For example, you can use the -c or --config flags to specify the values for installing.
If you need help, run egctl help
from the terminal window.
Operation
The following table includes short descriptions and the general syntax for all the egctl
operations:
Operation | Syntax | Description |
---|---|---|
version | egctl version | Prints out build version information. |
config | egctl config ENTITY | Retrieve information about proxy configuration from envoy proxy and gateway |
analyze | egctl analyze | Analyze EG configuration and print validation messages |
experimental | egctl experimental | Subcommand for experimental features. These do not guarantee backwards compatibility |
Examples
Use the following set of examples to help you familiarize yourself with running the commonly used egctl
operations:
# Retrieve all information about proxy configuration from envoy
egctl config envoy-proxy all <instance_name>
# Retrieve listener information about proxy configuration from envoy
egctl config envoy-proxy listener <instance_name>
# Retrieve information about envoy gateway
egctl config envoy-gateway
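As a purely illustrative sketch (not the actual egctl implementation), the command layout described above could be wired up with the spf13/cobra library as follows:
// An illustrative skeleton of the egctl command layout using spf13/cobra.
package main

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"
)

func main() {
    root := &cobra.Command{Use: "egctl"}

    version := &cobra.Command{
        Use:   "version",
        Short: "Prints out build version information",
        Run: func(cmd *cobra.Command, args []string) {
            fmt.Println("egctl version: dev") // placeholder version string
        },
    }

    config := &cobra.Command{
        Use:   "config ENTITY [TYPE] [NAME]",
        Short: "Retrieve proxy configuration from envoy proxy and gateway",
        Args:  cobra.MinimumNArgs(1),
        Run: func(cmd *cobra.Command, args []string) {
            fmt.Printf("retrieving %q configuration...\n", args[0])
        },
    }

    root.AddCommand(version, config)
    if err := root.Execute(); err != nil {
        os.Exit(1)
    }
}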
1.12 - Envoy Gateway Extensions Design
As outlined in the official goals for the Envoy Gateway project, one of the main goals is to “provide a common foundation for vendors to build value-added products
without having to re-engineer fundamental interactions”. Development of the Envoy Gateway project has been focused on developing the core features for the project and
Kubernetes Gateway API conformance. This system focuses on the “common foundation for vendors” component by introducing a way for vendors to extend Envoy Gateway.
To meaningfully extend Envoy Gateway and provide additional features, Extensions need to be able to introduce their own custom resources and have a high level of control
over the configuration generated by Envoy Gateway. Simply applying some static xDS configuration patches or relying on the existing Gateway API resources are both insufficient on their own
as means to add larger features that require dynamic user-configuration.
As an example, an extension developer may wish to provide their own out-of-the-box authentication filters that require configuration from the end-user. This is a scenario where the ability to introduce
custom resources and attach them to HTTPRoutes as an ExtensionRef is necessary. Providing the same feature through a series of xDS patch resources would be too cumbersome for many end-users that want to avoid
that level of complexity when managing their clusters.
Goals
- Provide a foundation for extending the Envoy Gateway control plane
- Allow Extension Developers to introduce their own custom resources for extending the Gateway-API via ExtensionRefs, policyAttachments (future) and backendRefs (future).
- Extension developers should NOT have to maintain a custom fork of Envoy Gateway
- Provide a system for extending Envoy Gateway which allows extension projects to ship updates independent of Envoy Gateway’s release schedule
- Modify the generated Envoy xDS config
- Set up a foundation for the initial iteration of Extending Envoy Gateway
- Allow an Extension to hook into the infra manager pipeline (future)
Non-Goals
- The initial design does not capture every hook that Envoy Gateway will eventually support.
- Extend Gateway API Policy Attachments. At some point, these will be addressed using this extension system, but the initial implementation omits these.
- Support multiple extensions at the same time. Due to the fact that extensions will be modifying xDS resources after they are generated, handling the order of extension execution for each individual hook point is a challenge. Additionally, there is no
real way to prevent one extension from overwriting or breaking modifications to xDS resources that were made by another extension that was executed first.
Overview
Envoy Gateway can be extended by vendors by means of an extension server developed by the vendor and deployed alongside Envoy Gateway.
An extension server can make use of one or more pre/post hooks inside Envoy Gateway before and after its major components (translator, etc.) to allow the extension to modify the data going into or coming out of these components.
An extension can be created external to Envoy Gateway as its own Kubernetes deployment or loaded as a sidecar. gRPC is used for the calls between Envoy Gateway and an extension. In the hook call, Envoy Gateway sends data as well
as context information to the extension and expects a reply with a modified version of the data that was sent to the extension. Since extensions fundamentally alter the logic and data that Envoy Gateway provides, Extension projects assume responsibility for any bugs and issues
they create as a direct result of their modification of Envoy Gateway.
Diagram
Registering Extensions in Envoy Gateway
Information about the extension that Envoy Gateway needs to load is configured in the Envoy Gateway config.
An example configuration:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
extensionManager:
resources:
- group: example.myextension.io
version: v2
kind: OAuth2Filter
hooks:
xdsTranslator:
post:
- Route
- VirtualHost
- HTTPListener
- Translation
service:
host: my-extension.example
port: 443
tls:
certificateRef:
name: my-secret
namespace: default
An extension must supply connection information in the extension.service
field so that Envoy Gateway can communicate with the extension. The tls
configuration is optional.
If the extension wants Envoy Gateway to watch resources for it, then the extension must configure the optional extension.resources field and supply a list of:
- group: the API group of the resource
- version: the API version of the resource
- kind: the Kind of the resource
The extension can configure the extensionManager.hooks
field to specify which hook points it would like to support. If a given hook is not listed here then it will not be executed even
if the extension is configured properly. This allows extension developers to only opt-in to the hook points they want to make use of.
This configuration is required to be provided at bootstrap and modifying the registered extension during runtime is not currently supported.
Envoy Gateway will keep track of the registered extension and its API groups
and kinds
when processing Gateway API resources.
Extending Gateway API and the Data Plane
Envoy Gateway manages Envoy deployments, which act as the data plane that handles actual user traffic. Users configure the data plane using the K8s Gateway API resources which Envoy
Gateway converts into Envoy specific configuration (xDS) to send over to Envoy.
Gateway API offers ExtensionRef filters and Policy Attachments as extension points for implementers to use. Envoy Gateway extends the Gateway API using these extension points to provide support for rate limiting
and authentication native to the project. The initial design of Envoy Gateway extensions will primarily focus on ExtensionRef filters so that extension developers can reference their own resources as HTTP Filters in the same way
that Envoy Gateway has native support for rate limiting and authentication filters.
When Envoy Gateway encounters an HTTPRoute or GRPCRoute that has an ExtensionRef
filter
with a group
and kind
that Envoy Gateway does not support, it will first
check the registered extension to determine if it supports the referenced object before considering it a configuration error.
This allows users to reference additional filters provided by their Envoy Gateway Extension in their HTTPRoutes / GRPCRoutes:
apiVersion: example.myextension.io/v1alpha1
kind: OAuth2Filter
metadata:
name: oauth2-filter
spec:
...
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: example
spec:
parentRefs:
- name: eg
hostnames:
- www.example.com
rules:
- clientSelectors:
- path:
type: PathPrefix
value: /
filters:
- type: ExtensionRef
extensionRef:
group: example.myextension.io
kind: OAuth2Filter
name: oauth2-filter
backendRefs:
- name: backend
port: 3000
In order to enable the usage of new resources introduced by an extension for translation and xDS modification, Envoy Gateway provides hook points within the translation pipeline, where it calls out to the extension service registered in the EnvoyGateway config if the extension specifies a group that matches the group of an ExtensionRef filter. The extension will then be able to modify the xDS that Envoy Gateway generated and send back the modified configuration. If an extension is not registered, or if the registered extension does not specify support for the group of an ExtensionRef filter, then Envoy Gateway will treat it as an unknown resource and provide an error to the user.
Note: Currently (as of v1) Gateway API does not provide a means to specify the namespace or version of an object referenced as an ExtensionRef
. The extension mechanism will assume that
the namespace of any ExtensionRef
is the same as the namespace of the HTTPRoute
or GRPCRoute
it is attached to rather than treating the name
field of an ExtensionRef
as a name.namespace
string.
If Gateway API adds support for these fields then the design of the Envoy Gateway extensions will be updated to support them.
Watching New Resources
Envoy Gateway will dynamically create new watches on resources introduced by the registered Extension. It does so by using controller-runtime to create new watches on Unstructured resources that match the versions, groups, and kinds that the registered extension configured. When communicating with an extension, Envoy Gateway sends these Unstructured resources over to the extension. This eliminates the need for the extension to create its own watches, which would have a strong chance of creating race conditions and reconciliation loops when resources change. When an extension receives the Unstructured resources from Envoy Gateway, it can perform its own type validation on them. Currently we make the simplifying assumption that the registered extension's Kinds are filters referenced by extensionRef in HTTPRouteFilters. Support for Policy attachments will be introduced at a later time.
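A hedged sketch of such a dynamic watch is shown below, using controller-runtime and the example OAuth2Filter registration from earlier; it is illustrative only and not the actual Envoy Gateway controller code.
// An illustrative controller-runtime watch on an extension-registered kind as Unstructured.
package main

import (
    "context"

    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/runtime/schema"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/reconcile"
)

func main() {
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
    if err != nil {
        panic(err)
    }

    // The GVK comes from the extensionManager.resources entry in the example
    // registration above: example.myextension.io/v2, Kind=OAuth2Filter.
    u := &unstructured.Unstructured{}
    u.SetGroupVersionKind(schema.GroupVersionKind{
        Group:   "example.myextension.io",
        Version: "v2",
        Kind:    "OAuth2Filter",
    })

    err = ctrl.NewControllerManagedBy(mgr).
        For(u).
        Complete(reconcile.Func(func(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
            // A change to the extension resource would trigger an Envoy Gateway
            // reconfiguration here.
            return reconcile.Result{}, nil
        }))
    if err != nil {
        panic(err)
    }

    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        panic(err)
    }
}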
xDS Hooks API
Envoy Gateway supports the following hooks as the initial foundation of the Extension system. Additional hooks can be developed using this extension system at a later point as new use-cases and needs are discovered. The primary iteration of the extension hooks
focuses solely on the modification of xDS resources.
Route Modification Hook
The Route level Hook provides a way for extensions to modify a route generated by Envoy Gateway before it is finalized.
Doing so allows extensions to configure/modify route fields configured by Envoy Gateway and also to configure the
Route’s TypedPerFilterConfig which may be desirable to do things such as pass settings and information to ext_authz filters.
The Post Route Modify hook also passes a list of Unstructured data for the externalRefs owned by the extension on the HTTPRoute that created this xDS route.
This hook is always executed when an extension is loaded that has added Route to EnvoyGateway.extensionManager.hooks.xdsTranslator.post, and only on Routes which were generated from an HTTPRoute that uses extension resources as externalRef filters.
// PostRouteModifyRequest sends a Route that was generated by Envoy Gateway along with context information to an extension so that the Route can be modified
message PostRouteModifyRequest {
envoy.config.route.v3.Route route = 1;
PostRouteExtensionContext post_route_context = 2;
}
// RouteExtensionContext provides resources introduced by an extension and watched by Envoy Gateway
// additional context information can be added to this message as more use-cases are discovered
message PostRouteExtensionContext {
// Resources introduced by the extension that were used as extensionRefs in an HTTPRoute/GRPCRoute
repeated ExtensionResource extension_resources = 1;
// hostnames are the fully qualified domain names attached to the HTTPRoute
repeated string hostnames = 2;
}
// ExtensionResource stores the data for a K8s API object referenced in an HTTPRouteFilter
// extensionRef. It is constructed from an unstructured.Unstructured marshalled to JSON. An extension
// can marshal the bytes from this resource back into an unstructured.Unstructured and then
// perform type checking to obtain the resource.
message ExtensionResource {
bytes unstructured_bytes = 1;
}
// PostRouteModifyResponse is the expected response from an extension and contains a modified version of the Route that was sent
// If an extension returns a nil Route then it will not be modified
message PostRouteModifyResponse {
envoy.config.route.v3.Route route = 1;
}
VirtualHost Modification Hook
The VirtualHost Hook provides a way for extensions to modify a VirtualHost generated by Envoy Gateway before it is finalized.
An extension can also make use of this hook to generate and insert entirely new Routes not generated by Envoy Gateway.
This hook is always executed when an extension is loaded that has added VirtualHost to EnvoyGateway.extensionManager.hooks.xdsTranslator.post.
An extension may return nil to not make any changes to the VirtualHost.
// PostVirtualHostModifyRequest sends a VirtualHost that was generated by Envoy Gateway along with context information to an extension so that the VirtualHost can be modified
message PostVirtualHostModifyRequest {
envoy.config.route.v3.VirtualHost virtual_host = 1;
PostVirtualHostExtensionContext post_virtual_host_context = 2;
}
// Empty for now but we can add fields to the context as use-cases are discovered without
// breaking any clients that use the API
// additional context information can be added to this message as more use-cases are discovered
message PostVirtualHostExtensionContext {}
// PostVirtualHostModifyResponse is the expected response from an extension and contains a modified version of the VirtualHost that was sent
// If an extension returns a nil Virtual Host then it will not be modified
message PostVirtualHostModifyResponse {
envoy.config.route.v3.VirtualHost virtual_host = 1;
}
HTTP Listener Modification Hook
The HTTP Listener modification hook is the broadest xDS modification Hook available and allows an extension to make changes to a Listener generated by Envoy Gateway before it is finalized.
This hook is always executed when an extension is loaded that has added HTTPListener to EnvoyGateway.extensionManager.hooks.xdsTranslator.post. An extension may return nil in order to not make any changes to the Listener.
// PostHTTPListenerModifyRequest sends a Listener that was generated by Envoy Gateway along with context information to an extension so that the Listener can be modified
message PostHTTPListenerModifyRequest {
envoy.config.listener.v3.Listener listener = 1;
PostHTTPListenerExtensionContext post_listener_context = 2;
}
// Empty for now but we can add fields to the context as use-cases are discovered without
// breaking any clients that use the API
// additional context information can be added to this message as more use-cases are discovered
message PostHTTPListenerExtensionContext {}
// PostHTTPListenerModifyResponse is the expected response from an extension and contains a modified version of the Listener that was sent
// If an extension returns a nil Listener then it will not be modified
message PostHTTPListenerModifyResponse {
envoy.config.listener.v3.Listener listener = 1;
}
Post xDS Translation Modify Hook
The Post Translate Modify hook allows an extension to modify the clusters and secrets in the xDS config.
This allows clusters that may change along with extension-specific configuration to be created dynamically, rather than relying on custom bootstrap config, which would only be sufficient for clusters that are static and not prone to having their configuration changed.
An example of how this may be used is to inject a cluster that will be used by an ext_authz http filter created by the extension.
The list of clusters and secrets returned by the extension is used as the final list of all clusters and secrets.
This hook is always executed when an extension is loaded that has added Translation to EnvoyGateway.extensionManager.hooks.xdsTranslator.post.
// PostTranslateModifyRequest currently sends only clusters and secrets to an extension.
// The extension is free to add/modify/remove the resources it received.
message PostTranslateModifyRequest {
PostTranslateExtensionContext post_translate_context = 1;
repeated envoy.config.cluster.v3.Cluster clusters = 2;
repeated envoy.extensions.transport_sockets.tls.v3.Secret secrets = 3;
}
// PostTranslateModifyResponse is the expected response from an extension and contains
// the full list of xDS clusters and secrets to be used for the xDS config.
message PostTranslateModifyResponse {
repeated envoy.config.cluster.v3.Cluster clusters = 1;
repeated envoy.extensions.transport_sockets.tls.v3.Secret secrets = 2;
}
Extension Service
Currently, an extension must implement all of the following hooks although it may return the input(s) it received
if no modification of the resource is desired. A future expansion of the extension hooks will allow an Extension to specify
with config which Hooks it would like to “subscribe” to and which Hooks it does not wish to support. These specific Hooks were chosen
in order to provide extensions with the ability to have both broad and specific control over xDS resources and to minimize the amount of data being sent.
service EnvoyGatewayExtension {
rpc PostRouteModify (PostRouteModifyRequest) returns (PostRouteModifyResponse) {};
rpc PostVirtualHostModify(PostVirtualHostModifyRequest) returns (PostVirtualHostModifyResponse) {};
rpc PostHTTPListenerModify(PostHTTPListenerModifyRequest) returns (PostHTTPListenerModifyResponse) {};
rpc PostTranslateModify(PostTranslateModifyRequest) returns (PostTranslateModifyResponse) {};
}
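As an illustration only (not a published SDK), a minimal no-op extension server could register this service as follows; the generated-code import path and type names are assumptions based on standard protoc-gen-go-grpc output, and the remaining hooks are omitted for brevity.
// An illustrative no-op extension server; the pb import path is hypothetical.
package main

import (
    "context"
    "net"

    "google.golang.org/grpc"

    pb "example.com/my-extension/gen/extension" // hypothetical generated bindings for the proto above
)

type extensionServer struct {
    // The embedded Unimplemented server stubs the hooks not shown here; a real
    // extension must implement all of them as noted above.
    pb.UnimplementedEnvoyGatewayExtensionServer
}

// PostRouteModify returns the Route unchanged, i.e. no modification.
func (s *extensionServer) PostRouteModify(ctx context.Context, req *pb.PostRouteModifyRequest) (*pb.PostRouteModifyResponse, error) {
    return &pb.PostRouteModifyResponse{Route: req.Route}, nil
}

func main() {
    lis, err := net.Listen("tcp", ":443") // matches the service port in the example registration
    if err != nil {
        panic(err)
    }
    srv := grpc.NewServer()
    pb.RegisterEnvoyGatewayExtensionServer(srv, &extensionServer{})
    if err := srv.Serve(lis); err != nil {
        panic(err)
    }
}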
Design Decisions
- Envoy Gateway watches new custom resources introduced by a loaded extension and passes the resources back to the extension when they are used.
- This decision was made to solve the problem of how resources introduced by an extension get watched. If an extension server watches its own resources then it would need some way to trigger an Envoy Gateway reconfigure when a resource that Envoy Gateway is not watching gets updated. Having Envoy Gateway watch all resources removes any concern about creating race conditions or reconcile loops that would result from Envoy Gateway and the extension server both having so much separate state that needs to be synchronized.
- The Extension Server takes ownership of producing the correct xDS configuration in the hook responses
- The Extension Server will be responsible for ensuring the performance of the hook processing time
- The Post xDS level gRPC hooks all currently send a context field even though it contains nothing for several hooks. These fields exist so that they can be updated in the future to pass
additional information to extensions as new use-cases and needs are discovered.
- The initial design supplies the scaffolding for both “pre xDS” and “post xDS” hooks. Only the post hooks, which operate on xDS resources after they have been generated, are currently implemented.
The pre hooks will be implemented at a later date along with one or more hooks in the infra manager. The infra manager level hook(s) will exist to power use-cases such as dynamically creating Deployments/Services for the extension whenever Envoy Gateway creates an instance of Envoy Proxy. An extension developer might want to take advantage of this functionality to inject a new authorization service as a sidecar on the Envoy Proxy deployment for reduced latency.
- Multiple extensions are not supported at the same time. Preventing conflicts between multiple extensions that modify xDS resources would be too difficult to guarantee and would likely only generate issues.
Known Challenges
Extending Envoy Gateway by using an external extension server which makes use of hook points in Envoy Gateway does come with a few trade-offs. One known trade-off is the impact of the time that it takes for the hook calls to be executed. Since an extension would make use of hook points in Envoy Gateway that use gRPC for communication, the time it takes to perform these requests could become a concern for some extension developers. One way to minimize the request time of the hook calls is to load the extension server as a sidecar to Envoy Gateway to minimize the impact of networking on the hook calls.
1.13 - EnvoyPatchPolicy
Overview
This design introduces the EnvoyPatchPolicy API, allowing users to modify the xDS configuration that Envoy Gateway generates before it is sent to Envoy Proxy.
Envoy Gateway allows users to configure networking and security intent using the
upstream Gateway API as well as implementation specific Extension APIs defined in this project
to provide a more batteries included experience for application developers.
- These APIs are an abstracted version of the underlying Envoy xDS API to provide a better user experience for the application developer, exposing and setting only a subset of the fields for a specific feature, sometimes in an opinionated way (e.g. RateLimit)
- These APIs do not expose all of the capabilities that Envoy has, either because those features are desired but the API is not defined yet, or because the project cannot support such an extensive list of features.
To alleviate this problem, and provide an interim solution for a small section of advanced users who are well versed in
Envoy xDS API and its capabilities, this API is being introduced.
Goals
- Add an API allowing users to modify the generated xDS Configuration
Non Goals
- Support multiple patch mechanisms
Implementation
EnvoyPatchPolicy is a Direct Policy Attachment type API that can be used to extend the Gateway API. Modifications to the generated xDS configuration can be provided as a JSON Patch, which is defined in RFC 6902. This patching mechanism has been adopted in Kubernetes as well as Kustomize to update resource objects.
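For reference, here is a minimal Go sketch of applying an RFC 6902 patch to a JSON document using an existing JSON Patch library (github.com/evanphx/json-patch); it only illustrates the patching mechanism and is not the Envoy Gateway implementation.
// A minimal sketch of RFC 6902 patch application.
package main

import (
    "fmt"

    jsonpatch "github.com/evanphx/json-patch/v5"
)

func main() {
    // A trivial stand-in for a generated xDS resource.
    original := []byte(`{"name":"default/eg/http","filters":[]}`)

    // An "add" operation in the style of the jsonPatches entries shown below.
    patch, err := jsonpatch.DecodePatch([]byte(`[{"op":"add","path":"/filters/0","value":{"name":"envoy.filters.http.ratelimit"}}]`))
    if err != nil {
        panic(err)
    }

    modified, err := patch.Apply(original)
    if err != nil {
        panic(err)
    }
    fmt.Println(string(modified))
}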
Example
Here is an example highlighting how a user can configure global rate limiting with an external rate limit service using this API.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
namespace: default
spec:
gatewayClassName: eg
listeners:
- name: http
protocol: HTTP
port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
name: ratelimit-patch-policy
namespace: default
spec:
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: eg
namespace: default
type: JSONPatch
jsonPatches:
- type: "type.googleapis.com/envoy.config.listener.v3.Listener"
# The listener name is of the form <GatewayNamespace>/<GatewayName>/<GatewayListenerName>
name: default/eg/http
operation:
op: add
path: "/default_filter_chain/filters/0/typed_config/http_filters/0"
value:
name: "envoy.filters.http.ratelimit"
typed_config:
"@type": "type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit"
domain: "eag-ratelimit"
failure_mode_deny: true
timeout: 1s
rate_limit_service:
grpc_service:
envoy_grpc:
cluster_name: rate-limit-cluster
transport_api_version: V3
- type: "type.googleapis.com/envoy.config.route.v3.RouteConfiguration"
# The route name is of the form <GatewayNamespace>/<GatewayName>/<GatewayListenerName>
name: default/eg/http
operation:
op: add
path: "/virtual_hosts/0/rate_limits"
value:
- actions:
- remote_address: {}
- type: "type.googleapis.com/envoy.config.cluster.v3.Cluster"
name: rate-limit-cluster
operation:
op: add
path: ""
value:
name: rate-limit-cluster
type: STRICT_DNS
connect_timeout: 10s
lb_policy: ROUND_ROBIN
http2_protocol_options: {}
load_assignment:
cluster_name: rate-limit-cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: ratelimit.svc.cluster.local
port_value: 8081
Verification
- Offline - Leverage egctl x translate to ensure that the EnvoyPatchPolicy can be successfully applied and the desired output xDS is created.
- Runtime - Use the Status field within EnvoyPatchPolicy to highlight whether the patch was applied successfully or not.
State of the World
- Istio - Supports the EnvoyFilter API which allows users to customize the output xDS using patches and proto based merge
semantics.
Design Decisions
- This API will only support a single targetRef and can bind only to a Gateway resource. This simplifies reasoning about how patches will work.
- This API will always be an experimental API and cannot be graduated into a stable API because Envoy Gateway cannot guarantee
- that the naming scheme for the generated resource names will not change across releases
- that the underlying Envoy Proxy API will not change across releases
- This API needs to be explicitly enabled using the EnvoyGateway API
Open Questions
- Should the value support only JSON, or YAML as well (which is a JSON superset)?
Alternatives
1.14 - Observability: Accesslog
Overview
Envoy supports extensible access logging to different sinks: File, gRPC, etc. Envoy supports customizable access log formats using predefined fields as well as arbitrary HTTP request and response headers. Envoy supports several built-in access log filters and extension filters that are registered at runtime.
Envoy Gateway leverages Gateway API for configuring managed Envoy proxies. Gateway API defines core, extended, and implementation-specific API support levels for implementers such as Envoy Gateway to expose features. Since accesslog is not covered by Core or Extended APIs, EG should provide an easy way to configure access log formats and sinks per EnvoyProxy.
Goals
- Support sending accesslog to a File or OpenTelemetry backend
- TODO: Support access log filters based on CEL expressions
Non-Goals
Use-Cases
- Configure accesslog for an EnvoyProxy to File
- Configure accesslog for an EnvoyProxy to an OpenTelemetry backend
- Configure multiple accesslog providers for an EnvoyProxy
ProxyAccessLog API Type
type ProxyAccessLog struct {
// Disable disables access logging for managed proxies if set to true.
Disable bool `json:"disable,omitempty"`
// Settings defines accesslog settings for managed proxies.
// If unspecified, will send default format to stdout.
// +optional
Settings []ProxyAccessLogSetting `json:"settings,omitempty"`
}
type ProxyAccessLogSetting struct {
// Format defines the format of accesslog.
Format ProxyAccessLogFormat `json:"format"`
// Sinks defines the sinks of accesslog.
// +kubebuilder:validation:MinItems=1
Sinks []ProxyAccessLogSink `json:"sinks"`
}
type ProxyAccessLogFormatType string
const (
// ProxyAccessLogFormatTypeText defines the text accesslog format.
ProxyAccessLogFormatTypeText ProxyAccessLogFormatType = "Text"
// ProxyAccessLogFormatTypeJSON defines the JSON accesslog format.
ProxyAccessLogFormatTypeJSON ProxyAccessLogFormatType = "JSON"
// TODO: support format type "mix" in the future.
)
// ProxyAccessLogFormat defines the format of accesslog.
// +union
type ProxyAccessLogFormat struct {
// Type defines the type of accesslog format.
// +kubebuilder:validation:Enum=Text;JSON
// +unionDiscriminator
Type ProxyAccessLogFormatType `json:"type,omitempty"`
// Text defines the text accesslog format, following Envoy accesslog formatting,
// empty value results in proxy's default access log format.
// It's required when the format type is "Text".
// Envoy [command operators](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators) may be used in the format.
// The [format string documentation](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#config-access-log-format-strings) provides more information.
// +optional
Text *string `json:"text,omitempty"`
// JSON is additional attributes that describe the specific event occurrence.
// Structured format for the envoy access logs. Envoy [command operators](https://www.envoyproxy.io/docs/envoy/latest/configuration/observability/access_log/usage#command-operators)
// can be used as values for fields within the Struct.
// It's required when the format type is "JSON".
// +optional
JSON map[string]string `json:"json,omitempty"`
}
type ProxyAccessLogSinkType string
const (
// ProxyAccessLogSinkTypeFile defines the file accesslog sink.
ProxyAccessLogSinkTypeFile ProxyAccessLogSinkType = "File"
// ProxyAccessLogSinkTypeOpenTelemetry defines the OpenTelemetry accesslog sink.
ProxyAccessLogSinkTypeOpenTelemetry ProxyAccessLogSinkType = "OpenTelemetry"
)
type ProxyAccessLogSink struct {
// Type defines the type of accesslog sink.
// +kubebuilder:validation:Enum=File;OpenTelemetry
Type ProxyAccessLogSinkType `json:"type,omitempty"`
// File defines the file accesslog sink.
// +optional
File *FileEnvoyProxyAccessLog `json:"file,omitempty"`
// OpenTelemetry defines the OpenTelemetry accesslog sink.
// +optional
OpenTelemetry *OpenTelemetryEnvoyProxyAccessLog `json:"openTelemetry,omitempty"`
}
type FileEnvoyProxyAccessLog struct {
// Path defines the file path used to expose envoy access log(e.g. /dev/stdout).
// Empty value disables accesslog.
Path string `json:"path,omitempty"`
}
// TODO: consider reuse ExtensionService?
type OpenTelemetryEnvoyProxyAccessLog struct {
// Host defines the extension service hostname.
Host string `json:"host"`
// Port defines the port the extension service is exposed on.
//
// +optional
// +kubebuilder:validation:Minimum=0
// +kubebuilder:default=4317
Port int32 `json:"port,omitempty"`
// Resources is a set of labels that describe the source of a log entry, including envoy node info.
// It's recommended to follow [semantic conventions](https://opentelemetry.io/docs/reference/specification/resource/semantic_conventions/).
// +optional
Resources map[string]string `json:"resources,omitempty"`
// TODO: support more OpenTelemetry accesslog options(e.g. TLS, auth etc.) in the future.
}
Example
- The following is an example to disable access log.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: disable-accesslog
namespace: envoy-gateway-system
spec:
telemetry:
accessLog:
disable: true
- The following is an example with text format access log.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: text-access-logging
namespace: envoy-gateway-system
spec:
telemetry:
accessLog:
settings:
- format:
type: Text
text: |
[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%" "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"
sinks:
- type: File
file:
path: /dev/stdout
- The following is an example with json format access log.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: json-access-logging
namespace: envoy-gateway-system
spec:
telemetry:
accessLog:
settings:
- format:
type: JSON
json:
status: "%RESPONSE_CODE%"
message: "%LOCAL_REPLY_BODY%"
sinks:
- type: File
file:
path: /dev/stdout
- The following is an example with OpenTelemetry format access log.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: otel-access-logging
namespace: envoy-gateway-system
spec:
telemetry:
accessLog:
settings:
- format:
type: Text
text: |
[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%" "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"
sinks:
- type: OpenTelemetry
openTelemetry:
host: otel-collector.monitoring.svc.cluster.local
port: 4317
resources:
k8s.cluster.name: "cluster-1"
- The following is an example of sending the same format to different sinks.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: multi-sinks
namespace: envoy-gateway-system
spec:
telemetry:
accessLog:
settings:
- format:
type: Text
text: |
[%START_TIME%] "%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% "%REQ(X-FORWARDED-FOR)%" "%REQ(USER-AGENT)%" "%REQ(X-REQUEST-ID)%" "%REQ(:AUTHORITY)%" "%UPSTREAM_HOST%"
sinks:
- type: File
file:
path: /dev/stdout
- type: OpenTelemetry
openTelemetry:
host: otel-collector.monitoring.svc.cluster.local
port: 4317
resources:
k8s.cluster.name: "cluster-1"
1.15 - Observability: Metrics
Overview
Envoy provides a robust platform for metrics; it supports three different kinds of stats: counters, gauges, and histograms.
Envoy enables Prometheus format output via the /stats/prometheus admin endpoint.
Envoy supports different kinds of sinks, but EG will only support the OpenTelemetry sink.
Envoy Gateway leverages Gateway API for configuring managed Envoy proxies. Gateway API defines core, extended, and implementation-specific API support levels for implementers such as Envoy Gateway to expose features. Since metrics is not covered by Core or Extended APIs, EG should provide an easy way to configure metrics per EnvoyProxy.
Goals
- Support exposing metrics in Prometheus format (reusing the probe port).
- Support the OpenTelemetry stats sink.
Non-Goals
- Support other stats sinks.
Use-Cases
- Enable Prometheus metrics by default
- Disable Prometheus metrics
- Push metrics via the OpenTelemetry sink
- TODO: Customize histogram buckets of target metric
- TODO: Support stats matcher
ProxyMetric API Type
type ProxyMetrics struct {
// Prometheus defines the configuration for Admin endpoint `/stats/prometheus`.
Prometheus *PrometheusProvider `json:"prometheus,omitempty"`
// Sinks defines the metric sinks where metrics are sent to.
Sinks []MetricSink `json:"sinks,omitempty"`
}
type MetricSinkType string
const (
MetricSinkTypeOpenTelemetry MetricSinkType = "OpenTelemetry"
)
type MetricSink struct {
// Type defines the metric sink type.
// EG currently only supports OpenTelemetry.
// +kubebuilder:validation:Enum=OpenTelemetry
// +kubebuilder:default=OpenTelemetry
Type MetricSinkType `json:"type"`
// OpenTelemetry defines the configuration for OpenTelemetry sink.
// It's required if the sink type is OpenTelemetry.
OpenTelemetry *OpenTelemetrySink `json:"openTelemetry,omitempty"`
}
type OpenTelemetrySink struct {
// Host defines the service hostname.
Host string `json:"host"`
// Port defines the port the service is exposed on.
//
// +optional
// +kubebuilder:validation:Minimum=0
// +kubebuilder:validation:Maximum=65535
// +kubebuilder:default=4317
Port int32 `json:"port,omitempty"`
// TODO: add support for customizing OpenTelemetry sink in https://www.envoyproxy.io/docs/envoy/latest/api-v3/extensions/stat_sinks/open_telemetry/v3/open_telemetry.proto#envoy-v3-api-msg-extensions-stat-sinks-open-telemetry-v3-sinkconfig
}
type PrometheusProvider struct {
// Disable the Prometheus endpoint.
Disable bool `json:"disable,omitempty"`
}
Example
- The following is an example of disabling Prometheus metrics.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: prometheus
namespace: envoy-gateway-system
spec:
telemetry:
metrics:
prometheus:
disable: true
- The following is an example of sending metrics via the OpenTelemetry sink.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: otel-sink
namespace: envoy-gateway-system
spec:
telemetry:
metrics:
sinks:
- type: OpenTelemetry
openTelemetry:
host: otel-collector.monitoring.svc.cluster.local
port: 4317
1.16 - Observability: Tracing
Overview
Envoy supports extensible tracing to different sinks: Zipkin, OpenTelemetry, etc. An overview of Envoy tracing can be found here.
Envoy Gateway leverages Gateway API for configuring managed Envoy proxies. Gateway API defines core, extended, and implementation-specific API support levels for implementers such as Envoy Gateway to expose features. Since tracing is not covered by Core or Extended APIs, EG should provide an easy way to configure tracing per EnvoyProxy.
Only the OpenTelemetry sink can be configured currently; you can use the OpenTelemetry Collector to export to other tracing backends.
Goals
- Support sending tracing to the OpenTelemetry backend
- Support configurable sampling rate
- Support propagating tags from Literal, Environment and Request Header
Non-Goals
- Support other tracing backends, e.g. Zipkin, Jaeger
Use-Cases
- Configure tracing for an EnvoyProxy to the OpenTelemetry backend
ProxyTracing API Type
type ProxyTracing struct {
// SamplingRate controls the rate at which traffic will be
// selected for tracing if no prior sampling decision has been made.
// Defaults to 100, valid values [0-100]. 100 indicates 100% sampling.
// +kubebuilder:validation:Minimum=0
// +kubebuilder:validation:Maximum=100
// +kubebuilder:default=100
// +optional
SamplingRate *uint32 `json:"samplingRate,omitempty"`
// CustomTags defines the custom tags to add to each span.
// If provider is kubernetes, pod name and namespace are added by default.
CustomTags map[string]CustomTag `json:"customTags,omitempty"`
// Provider defines the tracing provider.
// Only OpenTelemetry is supported currently.
Provider TracingProvider `json:"provider"`
}
type TracingProviderType string
const (
TracingProviderTypeOpenTelemetry TracingProviderType = "OpenTelemetry"
)
type TracingProvider struct {
// Type defines the tracing provider type.
// EG currently only supports OpenTelemetry.
// +kubebuilder:validation:Enum=OpenTelemetry
// +kubebuilder:default=OpenTelemetry
Type TracingProviderType `json:"type"`
// Host defines the provider service hostname.
Host string `json:"host"`
// Port defines the port the provider service is exposed on.
//
// +optional
// +kubebuilder:validation:Minimum=0
// +kubebuilder:default=4317
Port int32 `json:"port,omitempty"`
}
type CustomTagType string
const (
// CustomTagTypeLiteral adds hard-coded value to each span.
CustomTagTypeLiteral CustomTagType = "Literal"
// CustomTagTypeEnvironment adds value from environment variable to each span.
CustomTagTypeEnvironment CustomTagType = "Environment"
// CustomTagTypeRequestHeader adds value from request header to each span.
CustomTagTypeRequestHeader CustomTagType = "RequestHeader"
)
type CustomTag struct {
// Type defines the type of custom tag.
// +kubebuilder:validation:Enum=Literal;Environment;RequestHeader
// +unionDiscriminator
// +kubebuilder:default=Literal
Type CustomTagType `json:"type"`
// Literal adds hard-coded value to each span.
// It's required when the type is "Literal".
Literal *LiteralCustomTag `json:"literal,omitempty"`
// Environment adds value from environment variable to each span.
// It's required when the type is "Environment".
Environment *EnvironmentCustomTag `json:"environment,omitempty"`
// RequestHeader adds value from request header to each span.
// It's required when the type is "RequestHeader".
RequestHeader *RequestHeaderCustomTag `json:"requestHeader,omitempty"`
// TODO: add support for Metadata tags in the future.
// EG currently doesn't support metadata for route or cluster.
}
// LiteralCustomTag adds hard-coded value to each span.
type LiteralCustomTag struct {
// Value defines the hard-coded value to add to each span.
Value string `json:"value"`
}
// EnvironmentCustomTag adds value from environment variable to each span.
type EnvironmentCustomTag struct {
// Name defines the name of the environment variable which to extract the value from.
Name string `json:"name"`
// DefaultValue defines the default value to use if the environment variable is not set.
// +optional
DefaultValue *string `json:"defaultValue,omitempty"`
}
// RequestHeaderCustomTag adds value from request header to each span.
type RequestHeaderCustomTag struct {
// Name defines the name of the request header which to extract the value from.
Name string `json:"name"`
// DefaultValue defines the default value to use if the request header is not set.
// +optional
DefaultValue *string `json:"defaultValue,omitempty"`
}
Example
- The following is an example of configuring tracing.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: tracing
namespace: envoy-gateway-system
spec:
telemetry:
tracing:
# sample 100% of requests
samplingRate: 100
provider:
host: otel-collector.monitoring.svc.cluster.local
port: 4317
customTags:
# This is an example of using a literal as a tag value
key1:
type: Literal
literal:
value: "val1"
# This is an example of using an environment variable as a tag value
env1:
type: Environment
environment:
name: ENV1
defaultValue: "-"
# This is an example of using a header value as a tag value
header1:
type: RequestHeader
requestHeader:
name: X-Header-1
defaultValue: "-"
1.17 - Rate Limit Design
Overview
Rate limit is a feature that allows the user to limit the number of incoming requests
to a predefined value based on attributes within the traffic flow.
Here are some reasons why a user may want to implement rate limits:
- To prevent malicious activity such as DDoS attacks.
- To prevent applications and its resources (such as a database) from getting overloaded.
- To create API limits based on user entitlements.
Scope Types
The rate limit type here describes the scope of rate limits.
Global - In this case, the rate limit is common across all the instances of Envoy proxies where it is applied, i.e. if the data plane has 2 replicas of Envoy running, and the rate limit is 10 requests/second, this limit is common and will be hit if 5 requests pass through the first replica and 5 requests pass through the second replica within the same second.
Local - In this case, the rate limits are specific to each instance/replica of Envoy running.
Note - This is not part of the initial design and will be added as a future enhancement.
Match Types
Rate limit a specific traffic flow
- Here is an example of a ratelimit implemented by the application developer to limit a specific user
by matching on a custom
x-user-id
header with a value set to one
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: ratelimit-specific-user
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- name: x-user-id
value: one
limit:
requests: 10
unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: example
spec:
parentRefs:
- name: eg
hostnames:
- www.example.com
rules:
- matches:
- path:
type: PathPrefix
value: /foo
filters:
- type: ExtensionRef
extensionRef:
group: gateway.envoyproxy.io
kind: RateLimitFilter
name: ratelimit-specific-user
backendRefs:
- name: backend
port: 3000
Rate limit all traffic flows
- Here is an example of a rate limit implemented by the application developer that limits the total requests made
to a specific route to safeguard health of internal application components. In this case, no specific
headers
match
is specified, and the rate limit is applied to all traffic flows accepted by this HTTPRoute
.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: ratelimit-all-requests
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
rateLimit:
type: Global
global:
rules:
- limit:
requests: 1000
unit: Second
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: example
spec:
parentRefs:
- name: eg
hostnames:
- www.example.com
rules:
- matches:
- path:
type: PathPrefix
value: /foo
filters:
- type: ExtensionRef
extensionRef:
group: gateway.envoyproxy.io
kind: RateLimitFilter
name: ratelimit-all-requests
backendRefs:
- name: backend
port: 3000
Rate limit per distinct value
- Here is an example of a rate limit implemented by the application developer to limit any unique user
by matching on a custom
x-user-id
header. Here, user A (recognised from the traffic flow using the header
x-user-id
and value a
) will be rate limited at 10 requests/hour and so will user B
(recognised from the traffic flow using the header x-user-id
and value b
).
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: ratelimit-per-user
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- type: Distinct
name: x-user-id
limit:
requests: 10
unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: example
spec:
parentRefs:
- name: eg
hostnames:
- www.example.com
rules:
- matches:
- path:
type: PathPrefix
value: /foo
filters:
- type: ExtensionRef
extensionRef:
group: gateway.envoyproxy.io
kind: RateLimitFilter
name: ratelimit-per-user
backendRefs:
- name: backend
port: 3000
Rate limit per source IP
- Here is an example of a rate limit implemented by the application developer that limits the total requests made
to a specific route by matching on source IP. In this case, requests from
x.x.x.x
will be rate limited at 10 requests/hour.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: ratelimit-per-ip
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- sourceIP: x.x.x.x/32
limit:
requests: 10
unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: example
spec:
parentRefs:
- name: eg
hostnames:
- www.example.com
rules:
- matches:
- path:
type: PathPrefix
value: /foo
filters:
- type: ExtensionRef
extensionRef:
group: gateway.envoyproxy.io
kind: RateLimitFilter
name: ratelimit-per-user
backendRefs:
- name: backend
port: 3000
Rate limit based on JWT claims
- Here is an example of rate limit implemented by the application developer that limits the total requests made
to a specific route by matching on the jwt claim. In this case, requests with jwt claim information of
{"name":"John Doe"}
will be rate limited at 10 requests/hour.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: jwt-example
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
jwt:
providers:
- name: example
remoteJWKS:
uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
claimToHeaders:
- claim: name
header: custom-request-header
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: ratelimit-specific-user
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- name: custom-request-header
value: John Doe
limit:
requests: 10
unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: example
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /foo
Multiple RateLimitFilters, rules and clientSelectors
- Users can create multiple RateLimitFilters and apply them to the same HTTPRoute. In such a case each RateLimitFilter will be applied to the route and matched (and limited) in a mutually exclusive way, independent of each other.
- Rate limits are applied for each RateLimitFilter rule when ALL the conditions under clientSelectors hold true.
Here’s an example highlighting this -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: ratelimit-all-safeguard-app
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
rateLimit:
type: Global
global:
rules:
- limit:
requests: 100
unit: Hour
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: ratelimit-per-user
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- type: Distinct
name: x-user-id
limit:
requests: 100
unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
name: example
spec:
parentRefs:
- name: eg
hostnames:
- www.example.com
rules:
- matches:
- path:
type: PathPrefix
value: /foo
filters:
- type: ExtensionRef
extensionRef:
group: gateway.envoyproxy.io
kind: RateLimitFilter
name: ratelimit-per-user
- type: ExtensionRef
extensionRef:
group: gateway.envoyproxy.io
kind: RateLimitFilter
name: ratelimit-all-safeguard-app
backendRefs:
- name: backend
port: 3000
- The user has created two RateLimitFilters and has attached them to an HTTPRoute - one (ratelimit-all-safeguard-app) to ensure that the backend does not get overwhelmed with requests, so any excess requests are rate limited irrespective of the attributes within the traffic flow, and another (ratelimit-per-user) to rate limit each distinct user client, who can be differentiated using the x-user-id header, to ensure that each client does not make excessive requests to the backend.
- If user baz (identified with the header and value of x-user-id: baz) sends 90 requests within the first second, and user bar sends 11 more requests during that same interval of 1 second, and user bar sends the 101st request within that second, the rule defined in ratelimit-all-safeguard-app gets activated and Envoy Gateway will rate limit the request sent by bar (and any other request sent within that 1 second). After 1 second, the rate limit counter associated with the ratelimit-all-safeguard-app rule is reset and evaluated again.
- If user bar also ends up sending 90 more requests within the hour, summing up bar's total request count to 101, the rate limit rule defined within ratelimit-per-user will get activated, and bar's requests will be rate limited again until the hour interval ends.
- Within that same hour, if baz sends 991 more requests, summing up baz's total request count to 1001, the rate limit rule defined within ratelimit-per-user will get activated for baz, and baz's requests will also be rate limited until the hour interval ends.
Design Decisions
- The initial design uses an Extension filter to apply the Rate Limit functionality on a specific HTTPRoute. This was preferred over the PolicyAttachment extension mechanism because it is unclear whether Rate Limit will need to be enforced or overridden by the platform administrator or not.
- The RateLimitFilter can only be applied as a filter to an HTTPRouteRule, applying it across all backends within an HTTPRoute; it cannot be applied as a filter within an HTTPBackendRef for a specific backend.
- The HTTPRoute API has a matches field within each rule to select a specific traffic flow to be routed to the destination backend. The RateLimitFilter API, which can be attached to an HTTPRoute via an extensionRef filter, also has a clientSelectors field within each rule to select attributes within the traffic flow to rate limit specific clients. The two levels of selectors/matches allow for flexibility and aim to hold match information specific to its use, allowing the author/owner of each configuration to be different. It also allows the clientSelectors field within the RateLimitFilter to be enhanced in the future with other matchable attributes, such as IP subnet, that are not relevant in the HTTPRoute API.
Implementation Details
Global Rate limiting
- Global rate limiting in Envoy Proxy can be achieved using the following:
  - Actions can be configured per xDS Route.
  - If the match criteria defined within these actions are met for a specific HTTP request, a set of key value pairs called descriptors, defined within the above actions, is sent to a remote rate limit service, whose configuration (such as the URL of the rate limit service) is defined using a rate limit filter.
  - Based on the information received by the rate limit service and its programmed configuration, a decision is computed on whether to rate limit the HTTP request or not, and is sent back to Envoy, which enforces this decision on the data plane.
- Envoy Gateway will leverage this Envoy Proxy feature by:
  - Translating the user facing RateLimitFilter API into Rate Limit Actions as well as Rate Limit Service configuration to implement the desired API intent.
  - Using the existing reference implementation of the rate limit service.
  - Requiring the Infrastructure administrator to enable the rate limit service using new settings that will be defined in the EnvoyGateway config API (see the sketch after this list).
  - Enhancing the xDS IR to hold the user facing rate limit intent.
  - Enhancing the xDS Translator to translate the rate limit field within the xDS IR into Rate Limit Actions as well as instantiate the rate limit filter.
  - Adding a new runner called rate-limit that subscribes to the xDS IR messages and translates them into a new Rate Limit Infra IR, which contains the rate limit service configuration as well as other information needed to deploy the rate limit service.
  - Enhancing the infrastructure service to subscribe to the Rate Limit Infra IR and deploy a provider specific rate limit service runnable entity.
  - Adding a Status field within the RateLimitFilter API to reflect whether the specific configuration was programmed correctly in these multiple locations or not.
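For illustration, the EnvoyGateway configuration an Infrastructure administrator might use to enable the rate limit service could look like the sketch below. This is only a sketch: the Redis URL is a placeholder, and field names may differ slightly across Envoy Gateway versions.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
provider:
  type: Kubernetes
rateLimit:
  backend:
    type: Redis
    redis:
      url: redis.redis-system.svc.cluster.local:6379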
1.18 - Running Envoy Gateway locally
Overview
Today, Envoy Gateway runs only on Kubernetes. This is an ideal solution when the applications are also running in Kubernetes.
However, there might be cases where the applications are running on the host, which would require Envoy Gateway to run locally.
Goals
- Define an API to allow Envoy Gateway to retrieve configuration while running locally.
- Define an API to allow Envoy Gateway to deploy the managed Envoy Proxy fleet on the host
machine.
Non Goals
- Support multiple ways to retrieve configuration while running locally.
- Support multiple ways to deploy the Envoy Proxy fleet locally on the host.
API
- The provider field within the EnvoyGateway configuration only supports Kubernetes today, which provides two features: the ability to retrieve resources from the Kubernetes API Server as well as deploy the managed Envoy Proxy fleet on Kubernetes.
- This document proposes adding a new top level provider type called Custom with two fields, resource and infrastructure, to allow the user to configure the sub providers for providing resource configuration and an infrastructure to deploy the Envoy Proxy data plane in.
- A File resource provider will be introduced to enable retrieving configuration locally by reading the configuration from a file.
- A Host infrastructure provider will be introduced to allow Envoy Gateway to spawn an Envoy Proxy child process on the host.
Here is an example configuration:
provider:
type: Custom
custom:
resource:
type: File
file:
paths:
- "config.yaml"
infrastructure:
type: Host
host: {}
1.19 - SecurityPolicy
Overview
This design document introduces the SecurityPolicy API, which allows system administrators to configure authentication and authorization policies for the traffic entering the gateway.
Goals
- Add an API definition to hold settings for configuring authentication and authorization rules
on the traffic entering the gateway.
Non Goals
- Define the API configuration fields in this API.
Implementation
SecurityPolicy
is a Policy Attachment type API that can be used to extend Gateway API
to define authentication and authorization rules.
Example
Here is an example highlighting how a user can configure this API.
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
namespace: default
spec:
gatewayClassName: eg
listeners:
- name: https
protocol: HTTPS
port: 443
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: jwt-authn-policy
namespace: default
spec:
jwt:
providers:
- name: example
remoteJWKS:
uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: eg
namespace: default
Features / API Fields
Here is a list of features that can be included in this API:
- JWT based authentication
- OIDC Authentication
- External Authorization
- Basic Auth
- API Key Auth
- CORS
Design Decisions
- This API will only support a single targetRef and can bind to a Gateway resource or an HTTPRoute or GRPCRoute.
- This API resource MUST be part of the same namespace as the targetRef resource.
- There can only be ONE policy resource attached to a specific targetRef, e.g. a Listener (section) within a Gateway.
- If the policy targets a resource but cannot attach to it, this information should be reflected in the Policy Status field using the Conflicted=True condition.
- If multiple policies target the same resource, the oldest resource (based on creation timestamp) will attach to the Gateway Listeners; the others will not.
- If Policy A has a targetRef that includes a sectionName, i.e. it targets a specific Listener within a Gateway, and Policy B has a targetRef that targets the same entire Gateway, then:
  - Policy A will be applied/attached to the specific Listener defined in the targetRef.SectionName.
  - Policy B will be applied to the remaining Listeners within the Gateway. Policy B will have an additional status condition Overridden=True.
- A Policy targeting the most specific scope wins over a policy targeting a lesser specific scope, i.e. a Policy targeting an xRoute (HTTPRoute or GRPCRoute) overrides a Policy targeting a Listener that is this route's parentRef, which in turn overrides a Policy targeting the Gateway the listener/section is a part of.
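To illustrate the precedence rules above, consider the sketch below. The policy names are hypothetical and the JWT settings are reused from the earlier example: the route-level policy overrides the gateway-level policy for traffic handled by the backend HTTPRoute, while the gateway-level policy continues to apply to the rest of the Gateway.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: gateway-wide-policy
  namespace: default
spec:
  jwt:
    providers:
    - name: example
      remoteJWKS:
        uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
    namespace: default
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: route-specific-policy
  namespace: default
spec:
  jwt:
    providers:
    - name: example-route-specific
      remoteJWKS:
        uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
    namespace: default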
Alternatives
- The project can indefinitely wait for these configuration parameters to be part of the Gateway API.
1.20 - TCP and UDP Proxy Design
Even though most of the use cases for Envoy Gateway are at Layer-7, Envoy Gateway can also work at Layer-4 to proxy TCP
and UDP traffic. This document will explore the options we have when operating Envoy Gateway at Layer-4 and explain the
design decision.
Envoy can work as a non-transparent proxy or a transparent proxy for both TCP and UDP, so ideally, Envoy Gateway should also be able to work in these two modes:
Non-transparent Proxy Mode
For TCP, Envoy terminates the downstream connection, connects the upstream with its own IP address, and proxies the
TCP traffic from the downstream to the upstream.
For UDP, Envoy receives UDP datagrams from the downstream, and uses its own IP address as the sender IP address when
proxying the UDP datagrams to the upstream.
In this mode, the upstream will see Envoy’s IP address and port.
Transparent Proxy Mode
For TCP, Envoy terminates the downstream connection, connects the upstream with the downstream IP address, and proxies
the TCP traffic from the downstream to the upstream.
For UDP, Envoy receives UDP datagrams from the downstream, and uses the downstream IP address as the sender IP address
when proxying the UDP datagrams to the upstream.
In this mode, the upstream will see the original downstream IP address and Envoy's MAC address.
Note: Even in transparent mode, the upstream can’t see the port number of the downstream because Envoy doesn’t forward
the port number.
The Implications of Transparent Proxy Mode
Escalated Privilege
Envoy needs to bind to the downstream IP when connecting to the upstream, which means Envoy requires the escalated CAP_NET_ADMIN privilege. This is often considered a bad security practice and is not allowed in some sensitive deployments.
Routing
The upstream can see the original source IP, but the original port number won’t be passed, so the return
traffic from the upstream must be routed back to Envoy because only Envoy knows how to send the return traffic back
to the right port number of the downstream, which requires routing at the upstream side to be set up.
In a Kubernetes cluster, Envoy Gateway will have to carefully cooperate with CNI plugins to get the routing right.
The Design Decision (For Now)
The implementation will only support proxying in non-transparent mode i.e. the backend will see the source IP and
port of the deployed Envoy instance instead of the client.
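For reference, Layer-4 proxying in this non-transparent mode is configured with Gateway API route resources. Below is a minimal sketch; it assumes a Gateway named eg with a TCP listener named tcp and a backend Service listening on port 3001, neither of which is part of the quickstart.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: tcp-backend
spec:
  parentRefs:
  - name: eg
    sectionName: tcp
  rules:
  - backendRefs:
    - name: backend
      port: 3001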
2 - User Guides
This section includes User Guides of Envoy Gateway.
2.1 - Quickstart
This guide will help you get started with Envoy Gateway in a few simple steps.
Prerequisites
A Kubernetes cluster.
Note: Refer to the Compatibility Matrix for supported Kubernetes versions.
Note: In case your Kubernetes cluster does not have a LoadBalancer implementation, we recommend installing one so the Gateway resource has an Address associated with it. We recommend using MetalLB.
Installation
Install the Gateway API CRDs and Envoy Gateway:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.6.0 -n envoy-gateway-system --create-namespace
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
Install the GatewayClass, Gateway, HTTPRoute and example app:
kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/v0.6.0/quickstart.yaml -n default
Note: quickstart.yaml
defines that Envoy Gateway will listen for
traffic on port 80 on its globally-routable IP address, to make it easy to use
browsers to test Envoy Gateway. When Envoy Gateway sees that its Listener is
using a privileged port (<1024), it will map this internally to an
unprivileged port, so that Envoy Gateway doesn’t need additional privileges.
It’s important to be aware of this mapping, since you may need to take it into
consideration when debugging.
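If you want to see which unprivileged port the Listener's port 80 maps to, one way is to inspect the ports of the generated Envoy service using the owning-gateway labels from the quickstart:
kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].spec.ports}'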
Testing the Configuration
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8888:80 &
Curl the example app through Envoy proxy:
curl --verbose --header "Host: www.example.com" http://localhost:8888/get
External LoadBalancer Support
You can also test the same functionality by sending traffic to the External IP. To get the external IP of the
Envoy service, run:
export GATEWAY_HOST=$(kubectl get svc/${ENVOY_SERVICE} -n envoy-gateway-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
In certain environments, the load balancer may be exposed using a hostname instead of an IP address. If so, replace ip in the above command with hostname.
Curl the example app through Envoy proxy:
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/get
Clean-Up
Use the steps in this section to uninstall everything from the quickstart guide.
Delete the GatewayClass, Gateway, HTTPRoute and Example App:
kubectl delete -f https://github.com/envoyproxy/gateway/releases/download/v0.6.0/quickstart.yaml --ignore-not-found=true
Delete the Gateway API CRDs and Envoy Gateway:
helm uninstall eg -n envoy-gateway-system
Next Steps
Checkout the Developer Guide to get involved in the project.
2.2 - CORS
This guide provides instructions for configuring Cross-Origin Resource Sharing (CORS) on Envoy Gateway.
CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different
domain.
Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure CORS.
This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Prerequisites
Follow the steps from the Quickstart guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Configuration
The below example defines a SecurityPolicy that allows CORS requests from www.foo.com.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: cors-example
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: backend
cors:
allowOrigins:
- type: Exact
value: "www.foo.com"
allowMethods:
- GET
- POST
allowHeaders:
- "x-header-1"
- "x-header-2"
exposeHeaders:
- "x-header-3"
- "x-header-4"
EOF
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/cors-example -o yaml
Testing
Ensure the GATEWAY_HOST environment variable from the Quickstart guide is set. If not, follow the Quickstart instructions to set the variable.
Verify that the CORS headers are present in the response to the OPTIONS request from http://www.foo.com:
curl -H "Origin: http://www.foo.com" \
-H "Host: www.example.com" \
-H "Access-Control-Request-Method: GET" \
-X OPTIONS -v -s \
http://$GATEWAY_HOST \
1> /dev/null
You should see the below response, indicating that the request from http://www.foo.com is allowed:
< access-control-allow-origin: http://www.foo.com
< access-control-allow-methods: GET, POST
< access-control-allow-headers: x-header-1, x-header-2
< access-control-max-age: 86400
< access-control-expose-headers: x-header-3, x-header-4
If you try to send a request from http://www.bar.com, you should see the below response:
curl -H "Origin: http://www.bar.com" \
-H "Host: www.example.com" \
-H "Access-Control-Request-Method: GET" \
-X OPTIONS -v -s \
http://$GATEWAY_HOST \
1> /dev/null
You won't see any CORS headers in the response, indicating that the request from http://www.bar.com was not allowed.
Note: The CORS specification requires browsers to send a preflight request to the server to ask whether they are allowed to access the restricted resource in another domain. Browsers are supposed to follow the response from the server to determine whether to send the actual request or not. The CORS filter only responds to preflight requests according to its configuration; it won't deny any requests. Browsers are responsible for enforcing the CORS policy.
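Because the filter never denies requests, a regular (non-preflight) request from a disallowed origin still reaches the backend; the response simply carries no CORS headers, and enforcement is left to the browser. You can confirm this with a plain GET request:
curl -H "Origin: http://www.bar.com" -H "Host: www.example.com" --verbose http://$GATEWAY_HOST/get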
Clean-Up
Follow the steps from the Quickstart guide to uninstall Envoy Gateway and the example manifest.
Delete the SecurityPolicy:
kubectl delete securitypolicy/cors-example
Next Steps
Checkout the Developer Guide to get involved in the project.
2.3 - Customize EnvoyProxy
Envoy Gateway provides an EnvoyProxy CRD that can be linked to the ParametersRef
in GatewayClass, allowing cluster admins to customize the managed EnvoyProxy Deployment and
Service. To learn more about GatewayClass and ParametersRef, please refer to Gateway API documentation.
Installation
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Add GatewayClass ParametersRef
First, you need to add ParametersRef in GatewayClass, and refer to EnvoyProxy Config:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
group: gateway.envoyproxy.io
kind: EnvoyProxy
name: custom-proxy-config
namespace: envoy-gateway-system
EOF
Customize EnvoyProxy Deployment Replicas
You can customize the EnvoyProxy Deployment Replicas via EnvoyProxy Config like:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
provider:
type: Kubernetes
kubernetes:
envoyDeployment:
replicas: 2
EOF
After you apply the config, you should see the number of envoyproxy replicas change to 2. You can also change this value dynamically.
kubectl get deployment -l gateway.envoyproxy.io/owning-gateway-name=eg
Customize EnvoyProxy Image
You can customize the EnvoyProxy Image via EnvoyProxy Config like:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
provider:
type: Kubernetes
kubernetes:
envoyDeployment:
container:
image: envoyproxy/envoy:v1.25-v0.6.0
EOF
After applying the config, you can check the deployment image and see that it has changed.
Customize EnvoyProxy Pod Annotations
You can customize the EnvoyProxy Pod Annotations via EnvoyProxy Config like:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
provider:
type: Kubernetes
kubernetes:
envoyDeployment:
pod:
annotations:
custom1: deploy-annotation1
custom2: deploy-annotation2
EOF
After applying the config, you can get the envoyproxy pods and see that the new annotations have been added.
Customize EnvoyProxy Deployment Resources
You can customize the EnvoyProxy Deployment Resources via EnvoyProxy Config like:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
provider:
type: Kubernetes
kubernetes:
envoyDeployment:
container:
resources:
requests:
cpu: 150m
memory: 640Mi
limits:
cpu: 500m
memory: 1Gi
EOF
Customize EnvoyProxy Deployment Env
You can customize the EnvoyProxy Deployment Env via EnvoyProxy Config like:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
provider:
type: Kubernetes
kubernetes:
envoyDeployment:
container:
env:
- name: env_a
value: env_a_value
- name: env_b
value: env_b_value
EOF
Envoy Gateway provides two initial environment variables, ENVOY_GATEWAY_NAMESPACE and ENVOY_POD_NAME, for the envoyproxy container.
After applying the config, you can get the envoyproxy deployment and see that the environment variables have been added.
Customize EnvoyProxy Deployment Volumes or VolumeMounts
You can customize the EnvoyProxy Deployment Volumes or VolumeMounts via EnvoyProxy Config like:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
provider:
type: Kubernetes
kubernetes:
envoyDeployment:
container:
volumeMounts:
- mountPath: /certs
name: certs
readOnly: true
pod:
volumes:
- name: certs
secret:
secretName: envoy-cert
EOF
After applying the config, you can get the envoyproxy deployment and see that the volumes and volume mounts have been added.
Customize EnvoyProxy Service Annotations
You can customize the EnvoyProxy Service Annotations via EnvoyProxy Config like:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
provider:
type: Kubernetes
kubernetes:
envoyService:
annotations:
custom1: svc-annotation1
custom2: svc-annotation2
EOF
After applying the config, you can get the envoyproxy service and see that the annotations have been added.
Customize EnvoyProxy Bootstrap Config
You can customize the EnvoyProxy bootstrap config via EnvoyProxy Config.
There are two ways to customize it:
- Replace: the whole bootstrap config will be replaced by the config you provided.
- Merge: the config you provided will be merged into the default bootstrap config.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
name: custom-proxy-config
namespace: envoy-gateway-system
spec:
bootstrap:
type: Replace
value: |
admin:
access_log:
- name: envoy.access_loggers.file
typed_config:
"@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: /dev/null
address:
socket_address:
address: 127.0.0.1
port_value: 20000
dynamic_resources:
ads_config:
api_type: DELTA_GRPC
transport_api_version: V3
grpc_services:
- envoy_grpc:
cluster_name: xds_cluster
set_node_on_first_message_only: true
lds_config:
ads: {}
resource_api_version: V3
cds_config:
ads: {}
resource_api_version: V3
static_resources:
clusters:
- connect_timeout: 10s
load_assignment:
cluster_name: xds_cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: envoy-gateway
port_value: 18000
typed_extension_protocol_options:
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions":
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions"
"explicit_http_config":
"http2_protocol_options": {}
name: xds_cluster
type: STRICT_DNS
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
common_tls_context:
tls_params:
tls_maximum_protocol_version: TLSv1_3
tls_certificate_sds_secret_configs:
- name: xds_certificate
sds_config:
path_config_source:
path: "/sds/xds-certificate.json"
resource_api_version: V3
validation_context_sds_secret_config:
name: xds_trusted_ca
sds_config:
path_config_source:
path: "/sds/xds-trusted-ca.json"
resource_api_version: V3
layered_runtime:
layers:
- name: runtime-0
rtds_layer:
rtds_config:
ads: {}
resource_api_version: V3
name: runtime-0
EOF
You can use egctl translate to get the default xDS Bootstrap configuration used by Envoy Gateway.
After applying the config, the bootstrap config will be overridden by the new config you provided.
Any errors in the configuration will be surfaced as status within the GatewayClass resource.
You can also validate this configuration using egctl translate.
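For comparison, a Merge-type customization only needs to contain the fields you want to add to the default bootstrap. The sketch below is illustrative only; the stats_flush_interval value is an assumption chosen for simplicity, not part of the default configuration.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  bootstrap:
    type: Merge
    value: |
      stats_flush_interval: 10s
EOF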
2.4 - Deployment Mode
One GatewayClass per Envoy Gateway
- Envoy Gateway can accept a single GatewayClass resource. If you've instantiated multiple GatewayClasses, we recommend running multiple Envoy Gateway controllers in different namespaces, linking a GatewayClass to each of them.
- Support for accepting multiple GatewayClasses is being tracked here.
Supported Modes
Kubernetes
- The default deployment model is: Envoy Gateway watches for resources such as Service and HTTPRoute in all namespaces and creates managed data plane resources, such as the EnvoyProxy Deployment, in the namespace where Envoy Gateway is running.
- Envoy Gateway also supports a Namespaced deployment mode; you can watch resources in specific namespaces by assigning EnvoyGateway.provider.kubernetes.watch.namespaces, and managed data plane resources are still created in the namespace where Envoy Gateway is running (see the sketch after this list).
- Support for alternate deployment modes is being tracked here.
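A sketch of the corresponding EnvoyGateway provider configuration for the Namespaced mode, watching a single illustrative namespace, is shown below:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
provider:
  type: Kubernetes
  kubernetes:
    watch:
      namespaces:
      - marketing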
Multi-tenancy
Kubernetes
A tenant is a group within an organization (e.g. a team or department) that shares organizational resources. We recommend that each tenant deploy their own Envoy Gateway controller in their respective namespace. Below is an example of deploying Envoy Gateway for the marketing and product teams in separate namespaces.
Let's deploy Envoy Gateway in the marketing namespace and also watch resources only in this namespace. We are also setting the controller name to a unique string, gateway.envoyproxy.io/marketing-gatewayclass-controller.
helm install --set config.envoyGateway.gateway.controllerName=gateway.envoyproxy.io/marketing-gatewayclass-controller --set config.envoyGateway.provider.kubernetes.watch.namespaces={marketing} eg-marketing oci://docker.io/envoyproxy/gateway-helm --version v0.6.0 -n marketing --create-namespace
Let's create a GatewayClass linked to the marketing team's Envoy Gateway controller, as well as other resources linked to it, so the backend application operated by this team can be exposed to external clients.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg-marketing
spec:
controllerName: gateway.envoyproxy.io/marketing-gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
namespace: marketing
spec:
gatewayClassName: eg-marketing
listeners:
- name: http
protocol: HTTP
port: 8080
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backend
namespace: marketing
---
apiVersion: v1
kind: Service
metadata:
name: backend
namespace: marketing
labels:
app: backend
service: backend
spec:
ports:
- name: http
port: 3000
targetPort: 3000
selector:
app: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
namespace: marketing
spec:
replicas: 1
selector:
matchLabels:
app: backend
version: v1
template:
metadata:
labels:
app: backend
version: v1
spec:
serviceAccountName: backend
containers:
- image: gcr.io/k8s-staging-ingressconformance/echoserver:v20221109-7ee2f3e
imagePullPolicy: IfNotPresent
name: backend
ports:
- containerPort: 3000
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
namespace: marketing
spec:
parentRefs:
- name: eg
hostnames:
- "www.marketing.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
EOF
Let's port-forward to the generated envoy proxy service in the marketing namespace and send a request to it:
export ENVOY_SERVICE=$(kubectl get svc -n marketing --selector=gateway.envoyproxy.io/owning-gateway-namespace=marketing,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl -n marketing port-forward service/${ENVOY_SERVICE} 8888:8080 &
curl --verbose --header "Host: www.marketing.example.com" http://localhost:8888/get
* Trying 127.0.0.1:8888...
* Connected to localhost (127.0.0.1) port 8888 (#0)
> GET /get HTTP/1.1
> Host: www.marketing.example.com
> User-Agent: curl/7.86.0
> Accept: */*
>
Handling connection for 8888
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Thu, 20 Apr 2023 19:19:42 GMT
< content-length: 521
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/get",
"host": "www.marketing.example.com",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.86.0"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Envoy-Internal": [
"true"
],
"X-Forwarded-For": [
"10.1.0.157"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"c637977c-458a-48ae-92b3-f8c429849322"
]
},
"namespace": "marketing",
"ingress": "",
"service": "",
"pod": "backend-74888f465f-bcs8f"
* Connection #0 to host localhost left intact
Let's deploy Envoy Gateway in the product namespace and also watch resources only in this namespace.
helm install --set config.envoyGateway.gateway.controllerName=gateway.envoyproxy.io/product-gatewayclass-controller --set config.envoyGateway.provider.kubernetes.watch.namespaces={product} eg-product oci://docker.io/envoyproxy/gateway-helm --version v0.6.0 -n product --create-namespace
Let's create a GatewayClass linked to the product team's Envoy Gateway controller, as well as other resources linked to it, so the backend application operated by this team can be exposed to external clients.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg-product
spec:
controllerName: gateway.envoyproxy.io/product-gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
namespace: product
spec:
gatewayClassName: eg-product
listeners:
- name: http
protocol: HTTP
port: 8080
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backend
namespace: product
---
apiVersion: v1
kind: Service
metadata:
name: backend
namespace: product
labels:
app: backend
service: backend
spec:
ports:
- name: http
port: 3000
targetPort: 3000
selector:
app: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend
namespace: product
spec:
replicas: 1
selector:
matchLabels:
app: backend
version: v1
template:
metadata:
labels:
app: backend
version: v1
spec:
serviceAccountName: backend
containers:
- image: gcr.io/k8s-staging-ingressconformance/echoserver:v20221109-7ee2f3e
imagePullPolicy: IfNotPresent
name: backend
ports:
- containerPort: 3000
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
namespace: product
spec:
parentRefs:
- name: eg
hostnames:
- "www.product.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
EOF
Let's port-forward to the generated envoy proxy service in the product namespace and send a request to it:
export ENVOY_SERVICE=$(kubectl get svc -n product --selector=gateway.envoyproxy.io/owning-gateway-namespace=product,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl -n product port-forward service/${ENVOY_SERVICE} 8889:8080 &
curl --verbose --header "Host: www.product.example.com" http://localhost:8889/get
* Trying 127.0.0.1:8889...
* Connected to localhost (127.0.0.1) port 8889 (#0)
> GET /get HTTP/1.1
> Host: www.product.example.com
> User-Agent: curl/7.86.0
> Accept: */*
>
Handling connection for 8889
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Thu, 20 Apr 2023 19:20:17 GMT
< content-length: 517
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/get",
"host": "www.product.example.com",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.86.0"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Envoy-Internal": [
"true"
],
"X-Forwarded-For": [
"10.1.0.156"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"39196453-2250-4331-b756-54003b2853c2"
]
},
"namespace": "product",
"ingress": "",
"service": "",
"pod": "backend-74888f465f-64fjs"
* Connection #0 to host localhost left intact
With the below command you can verify that the marketing team's backend, exposed using the www.marketing.example.com hostname, is not reachable through the product team's data plane.
curl --verbose --header "Host: www.marketing.example.com" http://localhost:8889/get
* Trying 127.0.0.1:8889...
* Connected to localhost (127.0.0.1) port 8889 (#0)
> GET /get HTTP/1.1
> Host: www.marketing.example.com
> User-Agent: curl/7.86.0
> Accept: */*
>
Handling connection for 8889
* Mark bundle as not supporting multiuse
< HTTP/1.1 404 Not Found
< date: Thu, 20 Apr 2023 19:22:13 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host localhost left intact
2.5 - Envoy Patch Policy
This guide explains the usage of the EnvoyPatchPolicy API.
Note: This API is meant for users extremely familiar with Envoy xDS semantics.
Also before considering this API for production use cases, please be aware that this API
is unstable and the outcome may change across versions. Use at your own risk.
Introduction
The EnvoyPatchPolicy API allows users to modify the output xDS configuration generated by Envoy Gateway, intended for EnvoyProxy, using JSON Patch semantics.
Motivation
This API was introduced to allow advanced users to be able to leverage Envoy Proxy functionality
not exposed by Envoy Gateway APIs today.
Quickstart
Prerequisites
- Follow the steps from the Quickstart guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Enable EnvoyPatchPolicy
By default, EnvoyPatchPolicy is disabled. Let's enable it in the EnvoyGateway startup configuration.
The default installation of Envoy Gateway installs a default EnvoyGateway configuration and attaches it using a ConfigMap. In the next step, we will update this resource to enable EnvoyPatchPolicy.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: envoy-gateway-config
namespace: envoy-gateway-system
data:
envoy-gateway.yaml: |
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
provider:
type: Kubernetes
gateway:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
extensionApis:
enableEnvoyPatchPolicy: true
EOF
- After updating the ConfigMap, you will need to restart the envoy-gateway deployment so the configuration takes effect:
kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
Testing
Customize Response
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
name: custom-response-patch-policy
namespace: default
spec:
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: eg
namespace: default
type: JSONPatch
jsonPatches:
- type: "type.googleapis.com/envoy.config.listener.v3.Listener"
# The listener name is of the form <GatewayNamespace>/<GatewayName>/<GatewayListenerName>
name: default/eg/http
operation:
op: add
path: "/default_filter_chain/filters/0/typed_config/local_reply_config"
value:
mappers:
- filter:
status_code_filter:
comparison:
op: EQ
value:
default_value: 404
runtime_key: key_b
status_code: 406
body:
inline_string: "could not find what you are looking for"
EOF
- Let's edit the HTTPRoute resource from the Quickstart to only match on paths with value /get:
kubectl patch httproute backend --type=json --patch '[{
  "op": "add",
  "path": "/spec/rules/0/matches/0/path/value",
  "value": "/get"
}]'
- Let's test it out by specifying a path other than /get:
$ curl --header "Host: www.example.com" http://localhost:8888/find
Handling connection for 8888
could not find what you are looking for
Debugging
Runtime
- The Status subresource should have information about the status of the resource. Make sure the Accepted=True and Programmed=True conditions are set to ensure that the policy has been applied to Envoy Proxy.
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"gateway.envoyproxy.io/v1alpha1","kind":"EnvoyPatchPolicy","metadata":{"annotations":{},"name":"custom-response-patch-policy","namespace":"default"},"spec":{"jsonPatches":[{"name":"default/eg/http","operation":{"op":"add","path":"/default_filter_chain/filters/0/typed_config/local_reply_config","value":{"mappers":[{"body":{"inline_string":"could not find what you are looking for"},"filter":{"status_code_filter":{"comparison":{"op":"EQ","value":{"default_value":404}}}}}]}},"type":"type.googleapis.com/envoy.config.listener.v3.Listener"}],"priority":0,"targetRef":{"group":"gateway.networking.k8s.io","kind":"Gateway","name":"eg","namespace":"default"},"type":"JSONPatch"}}
creationTimestamp: "2023-07-31T21:47:53Z"
generation: 1
name: custom-response-patch-policy
namespace: default
resourceVersion: "10265"
uid: a35bda6e-a0cc-46d7-a63a-cee765174bc3
spec:
jsonPatches:
- name: default/eg/http
operation:
op: add
path: /default_filter_chain/filters/0/typed_config/local_reply_config
value:
mappers:
- body:
inline_string: could not find what you are looking for
filter:
status_code_filter:
comparison:
op: EQ
value:
default_value: 404
type: type.googleapis.com/envoy.config.listener.v3.Listener
priority: 0
targetRef:
group: gateway.networking.k8s.io
kind: Gateway
name: eg
namespace: default
type: JSONPatch
status:
conditions:
- lastTransitionTime: "2023-07-31T21:48:19Z"
message: EnvoyPatchPolicy has been accepted.
observedGeneration: 1
reason: Accepted
status: "True"
type: Accepted
- lastTransitionTime: "2023-07-31T21:48:19Z"
message: successfully applied patches.
reason: Programmed
status: "True"
type: Programmed
Offline
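The translated output can also be inspected without a running cluster; assuming you have egctl installed, a command along the lines of the one below (the file name is a placeholder and the flags may differ across versions) translates Gateway API resources, including an EnvoyPatchPolicy, into xDS offline:
egctl x translate --from gateway-api --to xds --file envoy-patch-policy.yaml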
Caveats
This API will always be an unstable API, and the same outcome cannot be guaranteed across versions for these reasons:
- The Envoy Proxy API might deprecate and remove API fields.
- Envoy Gateway might alter the xDS translation, creating a different xDS output, such as changing the name field of resources.
2.6 - Gateway Address
The Gateway API provides an optional Addresses field through which Envoy Gateway can set addresses for the Envoy Proxy Service. The currently supported address type is IPAddress.
Installation
Install Envoy Gateway:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.6.0 -n envoy-gateway-system --create-namespace
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
External IPs
The addresses in Gateway.Spec.Addresses are used as the External IPs of the Envoy Proxy Service; this requires the addresses to be of type IPAddress.
Install the GatewayClass, Gateway from quickstart:
kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/v0.6.0/quickstart.yaml -n default
Set the address of the Gateway, the address settings here are for reference only:
kubectl patch gateway eg --type=json --patch '[{
"op": "add",
"path": "/spec/addresses",
"value": [{
"type": "IPAddress",
"value": "1.2.3.4"
}]
}]'
Verify the Gateway status:
kubectl get gateway
NAME CLASS ADDRESS PROGRAMMED AGE
eg eg 1.2.3.4 True 14m
Verify the Envoy Proxy Service status:
kubectl get service -n envoy-gateway-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
envoy-default-eg-64656661 LoadBalancer 10.96.236.219 1.2.3.4 80:31017/TCP 15m
envoy-gateway ClusterIP 10.96.192.76 <none> 18000/TCP 15m
envoy-gateway-metrics-service ClusterIP 10.96.124.73 <none> 8443/TCP 15m
Note: If Gateway.Spec.Addresses is explicitly set, these will be the only addresses that populate the Gateway status.
2.7 - Gateway API Metrics
Resource metrics for Gateway API objects are available using the Gateway API State Metrics project.
The project also provides example dashboards for visualising the metrics using Grafana, and example alerts using Prometheus & Alertmanager.
Prerequisites
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Run the following commands to install the metrics stack, with the Gateway API State Metrics configuration, on your kubernetes cluster:
kubectl apply --server-side -f https://raw.githubusercontent.com/Kuadrant/gateway-api-state-metrics/main/config/examples/kube-prometheus/bundle_crd.yaml
kubectl apply -f https://raw.githubusercontent.com/Kuadrant/gateway-api-state-metrics/main/config/examples/kube-prometheus/bundle.yaml
Metrics and Alerts
To access the Prometheus UI, wait for the statefulset to be ready, then use the port-forward command:
# This first command may fail if the statefulset has not been created yet.
# In that case, try again until you get a message like 'Waiting for 2 pods to be ready...'
# or 'statefulset rolling update complete 2 pods...'
kubectl -n monitoring rollout status --watch --timeout=5m statefulset/prometheus-k8s
kubectl -n monitoring port-forward service/prometheus-k8s 9090:9090 > /dev/null &
Navigate to http://localhost:9090.
Metrics can be queried from the 'Graph' tab, e.g. gatewayapi_gateway_created.
See the Gateway API State Metrics README for the full list of Gateway API metrics available.
Alerts can be seen in the ‘Alerts’ tab.
Gateway API specific alerts will be grouped under the ‘gateway-api.rules’ heading.
Note: Alerts are defined in a PrometheusRules custom resource in the ‘monitoring’ namespace. You can modify the alert rules by updating this resource.
Dashboards
To view the dashboards in Grafana, wait for the deployment to be ready, then use the port-forward command:
kubectl -n monitoring wait --timeout=5m deployment/grafana --for=condition=Available
kubectl -n monitoring port-forward service/grafana 3000:3000 > /dev/null &
Navigate to http://localhost:3000
and sign in with admin/admin.
The Gateway API State dashboards will be available in the ‘Default’ folder and tagged with ‘gateway-api’.
See the Gateway API State Metrics README for further information on available dashboards.
Note: Dashboards are loaded from configmaps. You can modify the dashboards in the Grafana UI, however you will need to export them from the UI and update the json in the configmaps to persist changes.
2.8 - Gateway API Support
As mentioned in the system design document, Envoy Gateway’s managed data plane is configured dynamically through
Kubernetes resources, primarily Gateway API objects. Envoy Gateway supports configuration using the following Gateway API resources.
GatewayClass
A GatewayClass represents a “class” of gateways, i.e. which Gateways should be managed by Envoy Gateway.
Envoy Gateway supports managing a single GatewayClass resource that matches its configured controllerName
and
follows Gateway API guidelines for resolving conflicts when multiple GatewayClasses exist with a matching
controllerName
.
Note: If specifying GatewayClass parameters reference, it must refer to an EnvoyProxy resource.
Gateway
When a Gateway resource is created that references the managed GatewayClass, Envoy Gateway will create and manage a
new Envoy Proxy deployment. Gateway API resources that reference this Gateway will configure this managed Envoy Proxy
deployment.
HTTPRoute
An HTTPRoute configures routing of HTTP traffic through one or more Gateways. The following HTTPRoute filters are
supported by Envoy Gateway:
- requestHeaderModifier: RequestHeaderModifiers can be used to modify or add request headers before the request is proxied to its destination.
- responseHeaderModifier: ResponseHeaderModifiers can be used to modify or add response headers before the response is sent back to the client.
- requestMirror: RequestMirrors configure destinations where the requests should also be mirrored to. Responses to mirrored requests will be ignored.
- requestRedirect: RequestRedirects configure policies for how requests that match the HTTPRoute should be modified and then redirected.
- urlRewrite: UrlRewrites allow for modification of the request's hostname and path before it is proxied to its destination.
- extensionRef: ExtensionRefs are used by Envoy Gateway to implement extended filters. Currently, Envoy Gateway supports rate limiting and request authentication filters. For more information about these filters, refer to the rate limiting and request authentication documentation.
Notes:
- The only BackendRef kind supported by Envoy Gateway is a Service. Routing traffic to other destinations such
as arbitrary URLs is not possible.
- The filters field within HTTPBackendRef is not supported.
TCPRoute
A TCPRoute configures routing of raw TCP traffic through one or more Gateways. Traffic can be forwarded to the
desired BackendRefs based on a TCP port number.
Note: A TCPRoute only supports proxying in non-transparent mode, i.e. the backend will see the source IP and port of
the Envoy Proxy instance instead of the client.
UDPRoute
A UDPRoute configures routing of raw UDP traffic through one or more Gateways. Traffic can be forwarded to the
desired BackendRefs based on a UDP port number.
Note: Similar to TCPRoutes, UDPRoutes only support proxying in non-transparent mode i.e. the backend will see the
source IP and port of the Envoy Proxy instance instead of the client.
GRPCRoute
A GRPCRoute configures routing of gRPC requests through one or more Gateways. They offer request matching by
hostname, gRPC service, gRPC method, or HTTP/2 Header. Envoy Gateway supports the following filters on GRPCRoutes to
provide additional traffic processing:
- requestHeaderModifier: RequestHeaderModifiers can be used to modify or add request headers before the request is proxied to its destination.
- responseHeaderModifier: ResponseHeaderModifiers can be used to modify or add response headers before the response is sent back to the client.
- requestMirror: RequestMirrors configure destinations where the requests should also be mirrored to. Responses to mirrored requests will be ignored.
Notes:
- The only BackendRef kind supported by Envoy Gateway is a Service. Routing traffic to other
destinations such as arbitrary URLs is not currently possible.
- The filters field within HTTPBackendRef is not supported.
TLSRoute
A TLSRoute configures routing of TCP traffic through one or more Gateways. However, unlike TCPRoutes, TLSRoutes
can match against TLS-specific metadata.
ReferenceGrant
A ReferenceGrant is used to allow a resource to reference another resource in a different namespace. Normally an
HTTPRoute created in namespace foo
is not allowed to reference a Service in namespace bar
. A ReferenceGrant permits
these types of cross-namespace references. Envoy Gateway supports the following ReferenceGrant use-cases:
- Allowing an HTTPRoute, GRPCRoute, TLSRoute, UDPRoute, or TCPRoute to reference a Service in a different namespace.
- Allowing an HTTPRoute’s
requestMirror
filter to include a BackendRef that references a Service in a different
namespace. - Allowing a Gateway’s SecretObjectReference to reference a secret in a different namespace.
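As an illustration of the first use-case, a ReferenceGrant similar to the sketch below (the foo and bar namespaces are hypothetical) allows HTTPRoutes in namespace foo to reference Services in namespace bar; note that the grant is created in the target (bar) namespace:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: ReferenceGrant
metadata:
  name: allow-httproutes-from-foo
  namespace: bar
spec:
  from:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    namespace: foo
  to:
  - group: ""
    kind: Service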
2.9 - GRPC Routing
The GRPCRoute resource allows users to configure gRPC routing by matching HTTP/2 traffic and forwarding it to backend gRPC servers.
To learn more about gRPC routing, refer to the Gateway API documentation.
Prerequisites
Install Envoy Gateway:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.6.0 -n envoy-gateway-system --create-namespace
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
Installation
Install the gRPC routing example resources:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/grpc-routing.yaml
The manifest installs a GatewayClass, Gateway, a Deployment, a Service, and a GRPCRoute resource.
The GatewayClass is a cluster-scoped resource that represents a class of Gateways that can be instantiated.
Note: Envoy Gateway is configured by default to manage a GatewayClass with
controllerName: gateway.envoyproxy.io/gatewayclass-controller
.
Verification
Check the status of the GatewayClass:
kubectl get gc --selector=example=grpc-routing
The status should reflect “Accepted=True”, indicating Envoy Gateway is managing the GatewayClass.
A Gateway represents configuration of infrastructure. When a Gateway is created, Envoy proxy infrastructure is
provisioned or configured by Envoy Gateway. The gatewayClassName
defines the name of a GatewayClass used by this
Gateway. Check the status of the Gateway:
kubectl get gateways --selector=example=grpc-routing
The status should reflect “Ready=True”, indicating the Envoy proxy infrastructure has been provisioned. The status also
provides the address of the Gateway. This address is used later in the guide to test connectivity to proxied backend
services.
Check the status of the GRPCRoute:
kubectl get grpcroutes --selector=example=grpc-routing -o yaml
The status for the GRPCRoute should surface “Accepted=True” and a parentRef
that references the example Gateway.
The example-route
matches any traffic for “grpc-example.com” and forwards it to the “yages” Service.
Testing the Configuration
Before testing GRPC routing to the yages
backend, get the Gateway’s address.
export GATEWAY_HOST=$(kubectl get gateway/example-gateway -o jsonpath='{.status.addresses[0].value}')
Test GRPC routing to the yages
backend using the grpcurl command.
grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
You should see a successful response from the yages backend.
Envoy Gateway also supports gRPC-Web requests for this configuration. The below curl command can be used to send a gRPC-Web request over HTTP/2. You should receive the same response seen in the previous command.
The data in the body, AAAAAAA=, is a base64-encoded representation of an empty message (data length 0) that the Ping RPC accepts.
curl --http2-prior-knowledge -s ${GATEWAY_HOST}:80/yages.Echo/Ping -H 'Host: grpc-example.com' -H 'Content-Type: application/grpc-web-text' -H 'Accept: application/grpc-web-text' -XPOST -d'AAAAAAA=' | base64 -d
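If you are curious where the AAAAAAA= payload comes from: the gRPC-Web text frame for an empty message is a 1-byte flag followed by a 4-byte length, all zeros, and base64-encoding five zero bytes reproduces it. A quick check, assuming head and base64 are available:
head -c 5 /dev/zero | base64    # prints AAAAAAA=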
GRPCRoute Match
The matches field can be used to restrict the route to a specific set of requests based on gRPC's service and/or method names. It supports two match types: Exact and RegularExpression.
Exact
Exact match is the default match type.
The following example shows how to match a request based on the service and method names for grpc.reflection.v1alpha.ServerReflection/ServerReflectionInfo, as well as a match for all services with a method name Ping, which matches yages.Echo/Ping in our deployment.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
name: yages
labels:
example: grpc-routing
spec:
parentRefs:
- name: example-gateway
hostnames:
- "grpc-example.com"
rules:
- matches:
- method:
method: ServerReflectionInfo
service: grpc.reflection.v1alpha.ServerReflection
- method:
method: Ping
backendRefs:
- group: ""
kind: Service
name: yages
port: 9000
weight: 1
EOF
Verify the GRPCRoute status:
kubectl get grpcroutes --selector=example=grpc-routing -o yaml
Test GRPC routing to the yages
backend using the grpcurl command.
grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
RegularExpression
The following example shows how to match a request based on the service and method names with match type RegularExpression. It matches all services and methods matching the pattern /.*.Echo/Pin.+, which matches yages.Echo/Ping in our deployment.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
name: yages
labels:
example: grpc-routing
spec:
parentRefs:
- name: example-gateway
hostnames:
- "grpc-example.com"
rules:
- matches:
- method:
method: ServerReflectionInfo
service: grpc.reflection.v1alpha.ServerReflection
- method:
method: "Pin.+"
service: ".*.Echo"
type: RegularExpression
backendRefs:
- group: ""
kind: Service
name: yages
port: 9000
weight: 1
EOF
Verify the GRPCRoute status:
kubectl get grpcroutes --selector=example=grpc-routing -o yaml
Test GRPC routing to the yages
backend using the grpcurl command.
grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
2.10 - HTTP Redirects
The HTTPRoute resource can issue redirects to clients or rewrite paths sent upstream using filters. Note that
HTTPRoute rules cannot use both filter types at once. Currently, Envoy Gateway only supports core
HTTPRoute filters which consist of RequestRedirect
and RequestHeaderModifier
at the time of this writing. To
learn more about HTTP routing, refer to the Gateway API documentation.
Prerequisites
Follow the steps from the Quickstart to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTPS.
Redirects
Redirects return HTTP 3XX responses to a client, instructing it to retrieve a different resource. A
RequestRedirect
filter instructs Gateways to emit a redirect response to requests that match the rule.
For example, to issue a permanent redirect (301) from HTTP to HTTPS, configure requestRedirect.statusCode=301
and
requestRedirect.scheme="https"
:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-to-https-filter-redirect
spec:
parentRefs:
- name: eg
hostnames:
- redirect.example
rules:
- filters:
- type: RequestRedirect
requestRedirect:
scheme: https
statusCode: 301
hostname: www.example.com
port: 443
backendRefs:
- name: backend
port: 3000
EOF
Note: 301 (the default) and 302 are the only supported statusCodes.
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-to-https-filter-redirect -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying redirect.example/get should result in a 301 response from the example Gateway, redirecting to the configured redirect hostname.
$ curl -L -vvv --header "Host: redirect.example" "http://${GATEWAY_HOST}/get"
...
< HTTP/1.1 301 Moved Permanently
< location: https://www.example.com/get
...
If you followed the steps in the Secure Gateways guide, you should be able to curl the redirect
location.
HTTP –> HTTPS
Listeners expose the TLS setting on a per domain or subdomain basis. TLS settings of a listener are applied to all domains that satisfy the hostname criteria.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/CN=example.com' -keyout CA.key -out CA.crt
openssl req -out example.com.csr -newkey rsa:2048 -nodes -keyout tls.key -subj "/CN=example.com"
Generate a self-signed wildcard certificate for example.com with the *.example.com extension:
cat <<EOF | openssl x509 -req -days 365 -CA CA.crt -CAkey CA.key -set_serial 0 \
-subj "/CN=example.com" \
-in example.com.csr -out tls.crt -extensions v3_req -extfile -
[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = example.com
DNS.2 = *.example.com
EOF
Create the Kubernetes TLS secret:
kubectl create secret tls example-com --key=tls.key --cert=tls.crt
Define an HTTPS listener on the existing Gateway:
cat <<EOF | kubectl apply -n default -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
spec:
gatewayClassName: eg
listeners:
- name: http
port: 80
protocol: HTTP
# hostname: "*.example.com"
- name: https
port: 443
protocol: HTTPS
# hostname: "*.example.com"
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: example-com
EOF
Check for any TLS certificate issues on the gateway.
kubectl -n default describe gateway eg
Create two HTTPRoutes and attach them to the HTTP and HTTPS listeners using the sectionName field.
cat <<EOF | kubectl apply -n default -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: tls-redirect
spec:
parentRefs:
- name: eg
sectionName: http
hostnames:
# - "*.example.com" # catch all hostnames
- "www.example.com"
rules:
- filters:
- type: RequestRedirect
requestRedirect:
scheme: https
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
spec:
parentRefs:
- name: eg
sectionName: https
hostnames:
- "www.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
EOF
Curl the example app through http listener:
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/get
Curl the example app through https listener:
curl -v -H 'Host:www.example.com' --resolve "www.example.com:443:$GATEWAY_HOST" \
--cacert CA.crt https://www.example.com:443/get
Path Redirects
Path redirects use an HTTP Path Modifier to replace either entire paths or path prefixes. For example, the HTTPRoute
below will issue a 302 redirect to all path.redirect.example
requests whose path begins with /get
to /status/200
.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-filter-path-redirect
spec:
parentRefs:
- name: eg
hostnames:
- path.redirect.example
rules:
- matches:
- path:
type: PathPrefix
value: /get
filters:
- type: RequestRedirect
requestRedirect:
path:
type: ReplaceFullPath
replaceFullPath: /status/200
statusCode: 302
backendRefs:
- name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-filter-path-redirect -o yaml
Querying path.redirect.example should result in a 302 response from the example Gateway and a redirect location containing the configured redirect path.
Query the path.redirect.example host:
curl -vvv --header "Host: path.redirect.example" "http://${GATEWAY_HOST}/get"
You should receive a 302 with a redirect location of http://path.redirect.example/status/200.
2.11 - HTTP Request Headers
The HTTPRoute resource can modify the headers of a request before forwarding it to the upstream service. Currently, Envoy Gateway only supports the core HTTPRoute filters, which consist of RequestRedirect and RequestHeaderModifier at the time of this writing; HTTPRoute rules cannot use both filter types at once. To learn more about HTTP routing, refer to the Gateway API documentation.
A RequestHeaderModifier
filter instructs Gateways to modify the headers in requests that match the rule
before forwarding the request upstream. Note that the RequestHeaderModifier
filter will only modify headers before the
request is sent from Envoy to the upstream service and will not affect response headers returned to the downstream
client.
Prerequisites
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
The RequestHeaderModifier
filter can add new headers to a request before it is sent to the upstream. If the request
does not have the header configured by the filter, then that header will be added to the request. If the request already
has the header configured by the filter, then the value of the header in the filter will be appended to the value of the
header in the request.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
filters:
- type: RequestHeaderModifier
requestHeaderModifier:
add:
- name: "add-header"
value: "foo"
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-headers -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying headers.example/get
should result in a 200
response from the example Gateway and the output from the
example app should indicate that the upstream example app received the header add-header
with the value:
something,foo
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "add-header: something"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
"headers": {
"Accept": [
"*/*"
],
"Add-Header": [
"something",
"foo"
],
...
Setting headers is similar to adding headers. If the request does not have the header configured by the filter, then it
will be added, but unlike adding request headers which will append the value of the header if
the request already contains it, setting a header will cause the value to be replaced by the value configured in the
filter.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
filters:
- type: RequestHeaderModifier
requestHeaderModifier:
set:
- name: "set-header"
value: "foo"
EOF
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the upstream example app received the header set-header with the original value something replaced by foo.
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "set-header: something"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> set-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
"headers": {
"Accept": [
"*/*"
],
"Set-Header": [
"foo"
],
...
Headers can be removed from a request by simply supplying a list of header names.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
name: backend
port: 3000
weight: 1
filters:
- type: RequestHeaderModifier
requestHeaderModifier:
remove:
- "remove-header"
EOF
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the upstream example app received the header add-header, but the header remove-header that was sent by curl was removed before the upstream received the request.
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "add-header: something" --header "remove-header: foo"
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
"headers": {
"Accept": [
"*/*"
],
"Add-Header": [
"something"
],
...
Combining Filters
Headers can be added, set, and removed in a single filter on the same HTTPRoute, and they will all perform as expected.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
filters:
- type: RequestHeaderModifier
requestHeaderModifier:
add:
- name: "add-header-1"
value: "foo"
set:
- name: "set-header-1"
value: "bar"
remove:
- "removed-header"
EOF
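To verify the combined behavior, here is a quick hedged check in the same style as the earlier examples (the header values sent by curl are arbitrary):
curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" --header "set-header-1: original" --header "removed-header: x"
The echoed request headers should show add-header-1: foo added, set-header-1 replaced with bar, and removed-header absent.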
2.12 - HTTP Response Headers
The HTTPRoute resource can modify the headers of a response before returning it to the downstream client. To learn more about HTTP routing, refer to the Gateway API documentation.
A ResponseHeaderModifier filter instructs Gateways to modify the headers in responses that match the rule before responding to the downstream client. Note that the ResponseHeaderModifier filter will only modify headers before the response is returned from Envoy to the downstream client and will not affect request headers forwarded to the upstream service.
Prerequisites
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
The ResponseHeaderModifier filter can add new headers to a response before it is returned to the downstream client. If the response does not have the header configured by the filter, then that header will be added to the response. If the response already has the header configured by the filter, then the value of the header in the filter will be appended to the value of the header in the response.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
add:
- name: "add-header"
value: "foo"
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-headers -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying headers.example/get
should result in a 200
response from the example Gateway and the output from the
example app should indicate that the downstream client received the header add-header
with the value: foo
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: X-Foo: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: X-Foo: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
< x-foo: value1
< add-header: foo
<
...
"headers": {
"Accept": [
"*/*"
],
"X-Echo-Set-Header": [
"X-Foo: value1"
]
...
Setting headers is similar to adding headers. If the response does not have the header configured by the filter, then it
will be added, but unlike adding response headers which will append the value of the header
if the response already contains it, setting a header will cause the value to be replaced by the value configured in the
filter.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
set:
- name: "set-header"
value: "foo"
EOF
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the downstream client received the header set-header with the original value value1 replaced by foo.
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: set-header: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: set-header: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
< set-header: foo
<
"headers": {
"Accept": [
"*/*"
],
"X-Echo-Set-Header": [
"set-header": value1"
]
...
Headers can be removed from a response by simply supplying a list of header names.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
name: backend
port: 3000
weight: 1
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
remove:
- "remove-header"
EOF
Querying headers.example/get should result in a 200 response from the example Gateway, and the output from the example app should indicate that the response header remove-header (set by the example app in response to the X-Echo-Set-Header request header) was removed before the response was returned to the downstream client.
$ curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: remove-header: value1'
...
> GET /get HTTP/1.1
> Host: headers.example
> User-Agent: curl/7.81.0
> Accept: */*
> X-Echo-Set-Header: remove-header: value1
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
"headers": {
"Accept": [
"*/*"
],
"X-Echo-Set-Header": [
"remove-header": value1"
]
...
Combining Filters
Headers can be added, set, and removed in a single filter on the same HTTPRoute, and they will all perform as expected.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- headers.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
filters:
- type: ResponseHeaderModifier
responseHeaderModifier:
add:
- name: "add-header-1"
value: "foo"
set:
- name: "set-header-1"
value: "bar"
remove:
- "removed-header"
EOF
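As with request headers, the combined response filter can be verified with a single request; the X-Echo-Set-Header request header is used the same way as in the earlier response header examples to make the example app emit a removed-header response header:
curl -vvv --header "Host: headers.example" "http://${GATEWAY_HOST}/get" -H 'X-Echo-Set-Header: removed-header: x'
The response headers should include add-header-1: foo and set-header-1: bar, while removed-header should not appear.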
2.13 - HTTP Routing
The HTTPRoute resource allows users to configure HTTP routing by matching HTTP traffic and forwarding it to Kubernetes backends. Currently, the only backend supported by Envoy Gateway is a Service resource. This guide shows how to route traffic based on host, header, and path fields and forward the traffic to different Kubernetes Services. To learn more about HTTP routing, refer to the Gateway API documentation.
Prerequisites
Install Envoy Gateway:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.6.0 -n envoy-gateway-system --create-namespace
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
Installation
Install the HTTP routing example resources:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/http-routing.yaml
The manifest installs a GatewayClass, Gateway, four Deployments, four Services, and three HTTPRoute resources.
The GatewayClass is a cluster-scoped resource that represents a class of Gateways that can be instantiated.
Note: Envoy Gateway is configured by default to manage a GatewayClass with
controllerName: gateway.envoyproxy.io/gatewayclass-controller
.
Verification
Check the status of the GatewayClass:
kubectl get gc --selector=example=http-routing
The status should reflect “Accepted=True”, indicating Envoy Gateway is managing the GatewayClass.
A Gateway represents configuration of infrastructure. When a Gateway is created, Envoy proxy infrastructure is
provisioned or configured by Envoy Gateway. The gatewayClassName
defines the name of a GatewayClass used by this
Gateway. Check the status of the Gateway:
kubectl get gateways --selector=example=http-routing
The status should reflect “Ready=True”, indicating the Envoy proxy infrastructure has been provisioned. The status also
provides the address of the Gateway. This address is used later in the guide to test connectivity to proxied backend
services.
The three HTTPRoute resources create routing rules on the Gateway. In order to receive traffic from a Gateway,
an HTTPRoute must be configured with parentRefs
which reference the parent Gateway(s) that it should be attached to.
An HTTPRoute can match against a single set of hostnames. These hostnames are matched before any other matching within the HTTPRoute takes place. Since example.com, foo.example.com, and bar.example.com are separate hosts with different routing requirements, each is deployed as its own HTTPRoute: example-route, foo-route, and bar-route.
Check the status of the HTTPRoutes:
kubectl get httproutes --selector=example=http-routing -o yaml
The status for each HTTPRoute should surface “Accepted=True” and a parentRef
that references the example Gateway.
The example-route
matches any traffic for “example.com” and forwards it to the “example-svc” Service.
Testing the Configuration
Before testing HTTP routing to the example-svc
backend, get the Gateway’s address.
export GATEWAY_HOST=$(kubectl get gateway/example-gateway -o jsonpath='{.status.addresses[0].value}')
Test HTTP routing to the example-svc
backend.
curl -vvv --header "Host: example.com" "http://${GATEWAY_HOST}/"
A 200
status code should be returned and the body should include "pod": "example-backend-*"
indicating the traffic
was routed to the example backend service. If you change the hostname to a hostname not represented in any of the
HTTPRoutes, e.g. “www.example.com”, the HTTP traffic will not be routed and a 404
should be returned.
The foo-route
matches any traffic for foo.example.com
and applies its routing rules to forward the traffic to the
“foo-svc” Service. Since there is only one path prefix match for /login
, only foo.example.com/login/*
traffic will
be forwarded. Test HTTP routing to the foo-svc
backend.
curl -vvv --header "Host: foo.example.com" "http://${GATEWAY_HOST}/login"
A 200
status code should be returned and the body should include "pod": "foo-backend-*"
indicating the traffic
was routed to the foo backend service. Traffic to any other paths that do not begin with /login
will not be matched by
this HTTPRoute. Test this by removing /login
from the request.
curl -vvv --header "Host: foo.example.com" "http://${GATEWAY_HOST}/"
The HTTP traffic will not be routed and a 404
should be returned.
Similarly, the bar-route
HTTPRoute matches traffic for bar.example.com
. All traffic for this hostname will be
evaluated against the routing rules. The most specific match will take precedence which means that any traffic with the
env:canary
header will be forwarded to bar-svc-canary
and if the header is missing or not canary
then it’ll be
forwarded to bar-svc
. Test HTTP routing to the bar-svc
backend.
curl -vvv --header "Host: bar.example.com" "http://${GATEWAY_HOST}/"
A 200 status code should be returned and the body should include "pod": "bar-backend-*" indicating the traffic was routed to the bar backend service.
Test HTTP routing to the bar-canary-svc
backend by adding the env: canary
header to the request.
curl -vvv --header "Host: bar.example.com" --header "env: canary" "http://${GATEWAY_HOST}/"
A 200 status code should be returned and the body should include "pod": "bar-canary-backend-*" indicating the traffic was routed to the bar canary backend service.
2.14 - HTTP URL Rewrite
HTTPURLRewriteFilter defines a filter that modifies a request during forwarding. At most one of these filters may be
used on a Route rule. This MUST NOT be used on the same Route rule as an HTTPRequestRedirect filter.
Prerequisites
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Rewrite URL Prefix Path
You can configure Envoy Gateway to rewrite the prefix in the URL as shown below. In this example, any request to http://${GATEWAY_HOST}/get/xxx will be rewritten to http://${GATEWAY_HOST}/replace/xxx.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-filter-url-rewrite
spec:
parentRefs:
- name: eg
hostnames:
- path.rewrite.example
rules:
- matches:
- path:
value: "/get"
filters:
- type: URLRewrite
urlRewrite:
path:
type: ReplacePrefixMatch
replacePrefixMatch: /replace
backendRefs:
- name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-filter-url-rewrite -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying http://${GATEWAY_HOST}/get/origin/path should rewrite to http://${GATEWAY_HOST}/replace/origin/path.
$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get/origin/path"
...
> GET /get/origin/path HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:03:28 GMT
< content-length: 503
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/replace/origin/path",
"host": "path.rewrite.example",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.85.0"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Envoy-Original-Path": [
"/get/origin/path"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"fd84b842-9937-4fb5-83c7-61470d854b90"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-6fdd4b9bd8-8vlc5"
...
You can see that the X-Envoy-Original-Path is /get/origin/path, but the actual path is /replace/origin/path.
Rewrite URL Full Path
You can configure Envoy Gateway to rewrite the full path in the URL as shown below. In this example, any request sent to http://${GATEWAY_HOST}/get/origin/path/xxxx will be rewritten to http://${GATEWAY_HOST}/force/replace/fullpath.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-filter-url-rewrite
spec:
parentRefs:
- name: eg
hostnames:
- path.rewrite.example
rules:
- matches:
- path:
type: PathPrefix
value: "/get/origin/path"
filters:
- type: URLRewrite
urlRewrite:
path:
type: ReplaceFullPath
replaceFullPath: /force/replace/fullpath
backendRefs:
- name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-filter-url-rewrite -o yaml
Querying http://${GATEWAY_HOST}/get/origin/path/extra should rewrite the request to http://${GATEWAY_HOST}/force/replace/fullpath.
$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get/origin/path/extra"
...
> GET /get/origin/path/extra HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:09:31 GMT
< content-length: 512
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/force/replace/fullpath",
"host": "path.rewrite.example",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.85.0"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Envoy-Original-Path": [
"/get/origin/path/extra"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"8ab774d6-9ffa-4faa-abbb-f45b0db00895"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-6fdd4b9bd8-8vlc5"
...
You can see that the X-Envoy-Original-Path is /get/origin/path/extra, but the actual path is /force/replace/fullpath.
Rewrite Host Name
You can configure Envoy Gateway to rewrite the hostname as shown below. In this example, any request sent to http://${GATEWAY_HOST}/get with --header "Host: path.rewrite.example" will have its host rewritten to envoygateway.io.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-filter-url-rewrite
spec:
parentRefs:
- name: eg
hostnames:
- path.rewrite.example
rules:
- matches:
- path:
type: PathPrefix
value: "/get"
filters:
- type: URLRewrite
urlRewrite:
hostname: "envoygateway.io"
backendRefs:
- name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-filter-url-rewrite -o yaml
Querying http://${GATEWAY_HOST}/get with --header "Host: path.rewrite.example" will rewrite the host to envoygateway.io.
$ curl -L -vvv --header "Host: path.rewrite.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: path.rewrite.example
> User-Agent: curl/7.85.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< date: Wed, 21 Dec 2022 11:15:15 GMT
< content-length: 481
< x-envoy-upstream-service-time: 0
< server: envoy
<
{
"path": "/get",
"host": "envoygateway.io",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.85.0"
],
"X-Envoy-Expected-Rq-Timeout-Ms": [
"15000"
],
"X-Forwarded-Host": [
"path.rewrite.example"
],
"X-Forwarded-Proto": [
"http"
],
"X-Request-Id": [
"39aa447c-97b9-45a3-a675-9fb266ab1af0"
]
},
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-6fdd4b9bd8-8vlc5"
...
You can see that the X-Forwarded-Host is path.rewrite.example, but the actual host is envoygateway.io.
2.15 - HTTPRoute Request Mirroring
The HTTPRoute resource allows one or more backendRefs to be provided. Requests will be routed to these upstreams. It is possible to divide the traffic between these backends using Traffic Splitting, but it is also possible to mirror requests to another Service instead. Request mirroring is accomplished using the Gateway API's HTTPRequestMirrorFilter on the HTTPRoute.
When requests are made to an HTTPRoute that uses an HTTPRequestMirrorFilter, the response will never come from the backendRef defined in the filter. Responses from the mirror backendRef are always ignored.
Installation
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Mirroring the Traffic
Next, create a new Deployment
and Service
to mirror requests to. The following example will use
a second instance of the application deployed in the quickstart.
kubectl apply -f - <<EOF
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backend-2
---
apiVersion: v1
kind: Service
metadata:
name: backend-2
labels:
app: backend-2
service: backend-2
spec:
ports:
- name: http
port: 3000
targetPort: 3000
selector:
app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-2
spec:
replicas: 1
selector:
matchLabels:
app: backend-2
version: v1
template:
metadata:
labels:
app: backend-2
version: v1
spec:
serviceAccountName: backend-2
containers:
- image: gcr.io/k8s-staging-ingressconformance/echoserver:v20221109-7ee2f3e
imagePullPolicy: IfNotPresent
name: backend-2
ports:
- containerPort: 3000
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
EOF
Then create an HTTPRoute that uses an HTTPRequestMirrorFilter to send requests to the original service from the quickstart and mirror requests to the service that was just deployed.
kubectl apply -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-mirror
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
filters:
- type: RequestMirror
requestMirror:
backendRef:
kind: Service
name: backend-2
port: 3000
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-mirror -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying backends.example/get
should result in a 200
response from the example Gateway and the output from the
example app should indicate which pod handled the request. There is only one pod in the deployment for the example app
from the quickstart, so it will be the same on all subsequent requests.
$ curl -v --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-79665566f5-s589f"
...
Check the logs of the pods and you will see that the original deployment and the new deployment each got a request:
$ kubectl logs deploy/backend && kubectl logs deploy/backend-2
...
Starting server, listening on port 3000 (http)
Echoing back request made to /get to client (10.42.0.10:41566)
Starting server, listening on port 3000 (http)
Echoing back request made to /get to client (10.42.0.10:45096)
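To observe the mirroring over a larger sample, one rough approach (relying on the echo server log format shown above) is to send a batch of requests and count the matching log lines in each deployment:
for i in $(seq 1 10); do curl -s -o /dev/null --header "Host: backends.example" "http://${GATEWAY_HOST}/get"; done
kubectl logs deploy/backend | grep -c "Echoing back request"
kubectl logs deploy/backend-2 | grep -c "Echoing back request"
Both counts should increase by the same amount, since every request routed to backend is also mirrored to backend-2.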
Multiple BackendRefs
When an HTTPRoute has multiple backendRefs and an HTTPRequestMirrorFilter, traffic splitting will still behave the same as it normally would for the main backendRefs, while the backendRef of the HTTPRequestMirrorFilter will continue receiving mirrored copies of the incoming requests.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-mirror
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
filters:
- type: RequestMirror
requestMirror:
backendRef:
kind: Service
name: backend-2
port: 3000
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
- group: ""
kind: Service
name: backend-3
port: 3000
EOF
Multiple HTTPRequestMirrorFilters
Multiple HTTPRequestMirrorFilters are not supported on the same HTTPRoute rule. When attempting to do so, the admission webhook will reject the configuration.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-mirror
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
filters:
- type: RequestMirror
requestMirror:
backendRef:
kind: Service
name: backend-2
port: 3000
- type: RequestMirror
requestMirror:
backendRef:
kind: Service
name: backend-3
port: 3000
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
Error from server: error when creating "STDIN": admission webhook "validate.gateway.networking.k8s.io" denied the request: spec.rules[0].filters: Invalid value: "RequestMirror": cannot be used multiple times in the same rule
2.16 - HTTPRoute Traffic Splitting
The HTTPRoute resource allows one or more backendRefs to be provided. Requests will be routed to these upstreams
if they match the rules of the HTTPRoute. If an invalid backendRef is configured, then HTTP responses will be returned
with status code 500
for all requests that would have been sent to that backend.
Installation
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Single backendRef
When a single backendRef is configured in an HTTPRoute, it will receive 100% of the traffic.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-headers -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Querying backends.example/get
should result in a 200
response from the example Gateway and the output from the
example app should indicate which pod handled the request. There is only one pod in the deployment for the example app
from the quickstart, so it will be the same on all subsequent requests.
$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-79665566f5-s589f"
...
Multiple backendRefs
If multiple backendRefs are configured, then traffic will be split between the backendRefs equally unless a weight is
configured.
First, create a second instance of the example app from the quickstart:
cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backend-2
---
apiVersion: v1
kind: Service
metadata:
name: backend-2
labels:
app: backend-2
service: backend-2
spec:
ports:
- name: http
port: 3000
targetPort: 3000
selector:
app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-2
spec:
replicas: 1
selector:
matchLabels:
app: backend-2
version: v1
template:
metadata:
labels:
app: backend-2
version: v1
spec:
serviceAccountName: backend-2
containers:
- image: gcr.io/k8s-staging-ingressconformance/echoserver:v20221109-7ee2f3e
imagePullPolicy: IfNotPresent
name: backend-2
ports:
- containerPort: 3000
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
EOF
Then create an HTTPRoute that uses both the app from the quickstart and the second instance that was just created
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
- group: ""
kind: Service
name: backend-2
port: 3000
EOF
Querying backends.example/get should result in 200 responses from the example Gateway, and the pod reported in the output from the example app should alternate between the original deployment and the new backend-2 deployment on subsequent requests.
$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
...
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
> add-header: something
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: application/json
< x-content-type-options: nosniff
< content-length: 474
< x-envoy-upstream-service-time: 0
< server: envoy
<
...
"namespace": "default",
"ingress": "",
"service": "",
"pod": "backend-75bcd4c969-lsxpz"
...
Weighted backendRefs
If multiple backendRefs are configured and an uneven traffic split between the backends is desired, then the weight field can be used to control the weight of requests to each backend. If weight is not configured for a backendRef, it is assumed to be 1.
The weight field in a backendRef controls the distribution of the traffic split. The proportion of
requests to a single backendRef is calculated by dividing its weight
by the sum of all backendRef weights in the
HTTPRoute. The weight is not a percentage and the sum of all weights does not need to add up to 100.
The HTTPRoute below will configure the gateway to send 80% of the traffic to the backend service, and 20% to the
backend-2 service.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 8
- group: ""
kind: Service
name: backend-2
port: 3000
weight: 2
EOF
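One rough way to observe the split empirically (assuming jq is installed and GATEWAY_HOST is set as in the earlier steps) is to send a batch of requests and tally which pod served each one:
for i in $(seq 1 50); do curl -s --header "Host: backends.example" "http://${GATEWAY_HOST}/get" | jq -r .pod; done | sort | uniq -c
Roughly 80% of the responses should come from the backend pods and 20% from the backend-2 pods; exact counts will vary between runs.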
Invalid backendRefs
backendRefs can be considered invalid for the following reasons:
- The group field is configured to something other than "". Currently, only the core API group (specified by omitting the group field or setting it to an empty string) is supported.
- The kind field is configured to anything other than Service. Envoy Gateway currently only supports Kubernetes Service backendRefs.
- The backendRef configures a Service with a namespace not permitted by any existing ReferenceGrants.
- The port field is not configured or is configured to a port that does not exist on the Service.
- The named Service configured by the backendRef cannot be found.
Modifying the above example to make the backend-2 backendRef invalid by using a port that does not exist on the Service
will result in 80% of the traffic being sent to the backend service, and 20% of the traffic receiving an HTTP response
with status code 500
.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-headers
spec:
parentRefs:
- name: eg
hostnames:
- backends.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 8
- group: ""
kind: Service
name: backend-2
port: 9000
weight: 2
EOF
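Envoy Gateway also surfaces the problem in the route status; a hedged way to inspect it is to read the route's parent conditions, which typically report why a reference could not be resolved:
kubectl get httproute/http-headers -o jsonpath='{.status.parents[0].conditions}'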
Querying backends.example/get should result in 200 responses 80% of the time, and 500 responses 20% of the time.
$ curl -vvv --header "Host: backends.example" "http://${GATEWAY_HOST}/get"
> GET /get HTTP/1.1
> Host: backends.example
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 500 Internal Server Error
< server: envoy
< content-length: 0
<
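To see the approximate 80/20 distribution rather than a single response, a simple status-code tally can help (a sketch; counts will vary from run to run):
for i in $(seq 1 50); do curl -s -o /dev/null -w "%{http_code}\n" --header "Host: backends.example" "http://${GATEWAY_HOST}/get"; done | sort | uniq -c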
2.17 - JWT Authentication
This guide provides instructions for configuring JSON Web Token (JWT) authentication. JWT authentication checks
if an incoming request has a valid JWT before routing the request to a backend service. Currently, Envoy Gateway only
supports validating a JWT from an HTTP header, e.g. Authorization: Bearer <token>
.
Envoy Gateway introduces a new CRD called SecurityPolicy that allows the user to configure JWT authentication.
This instantiated resource can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Prerequisites
Follow the steps from the Quickstart guide to install Envoy Gateway and the example manifest.
For GRPC - follow the steps from the GRPC Routing example.
Before proceeding, you should be able to query the example backend using HTTP or GRPC.
Configuration
Allow requests with a valid JWT by creating a SecurityPolicy and attaching it to the example HTTPRoute or GRPCRoute.
HTTPRoute
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/jwt/jwt.yaml
Two HTTPRoutes have been created, one for /foo and another for /bar. A SecurityPolicy has been created that targets HTTPRoute foo to authenticate requests for /foo. The HTTPRoute bar is not targeted by the SecurityPolicy and will allow unauthenticated requests to /bar.
Verify the HTTPRoute configuration and status:
kubectl get httproute/foo -o yaml
kubectl get httproute/bar -o yaml
The SecurityPolicy is configured for JWT authentication and uses a single JSON Web Key Set (JWKS)
provider for authenticating the JWT.
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/jwt-example -o yaml
GRPCRoute
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/jwt/grpc-jwt.yaml
A SecurityPolicy has been created that targets GRPCRoute yages to authenticate all requests for the yages service.
Verify the GRPCRoute configuration and status:
kubectl get grpcroute/yages -o yaml
The SecurityPolicy is configured for JWT authentication and uses a single JSON Web Key Set (JWKS)
provider for authenticating the JWT.
Verify the SecurityPolicy configuration:
kubectl get securitypolicy/jwt-example -o yaml
Testing
Ensure the GATEWAY_HOST
environment variable from the Quickstart guide is set. If not, follow the
Quickstart instructions to set the variable.
HTTPRoute
Verify that requests to /foo
are denied without a JWT:
curl -sS -o /dev/null -H "Host: www.example.com" -w "%{http_code}\n" http://$GATEWAY_HOST/foo
A 401
HTTP response code should be returned.
Get the JWT used for testing request authentication:
TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode
Note: The above command decodes and returns the token’s payload. You can replace f2
with f1
to view the token’s
header.
Verify that a request to /foo
with a valid JWT is allowed:
curl -sS -o /dev/null -H "Host: www.example.com" -H "Authorization: Bearer $TOKEN" -w "%{http_code}\n" http://$GATEWAY_HOST/foo
A 200
HTTP response code should be returned.
Verify that requests to /bar
are allowed without a JWT:
curl -sS -o /dev/null -H "Host: www.example.com" -w "%{http_code}\n" http://$GATEWAY_HOST/bar
GRPCRoute
Verify that requests to yages
service are denied without a JWT:
grpcurl -plaintext -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
You should see the below response
Error invoking method "yages.Echo/Ping": rpc error: code = Unauthenticated desc = failed to query for service descriptor "yages.Echo": Jwt is missing
Get the JWT used for testing request authentication:
TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode
Note: The above command decodes and returns the token’s payload. You can replace f2
with f1
to view the token’s
header.
Verify that a request to yages
service with a valid JWT is allowed:
grpcurl -plaintext -H "authorization: Bearer $TOKEN" -authority=grpc-example.com ${GATEWAY_HOST}:80 yages.Echo/Ping
You should see a successful response from the yages service.
Clean-Up
Follow the steps from the Quickstart guide to uninstall Envoy Gateway and the example manifest.
Delete the SecurityPolicy:
kubectl delete securitypolicy/jwt-example
Next Steps
Checkout the Developer Guide to get involved in the project.
2.18 - Multicluster Service Routing
The Multicluster Service API ServiceImport object can be used as part of the Gateway API backendRef for configuring routes. For more information about the Multicluster Service API, follow the SIG documentation.
We will use the Submariner project to set up the multicluster environment and export the service to be routed from peer clusters.
Setting up KIND clusters and installing Submariner
- We will be using KIND clusters to demonstrate this example.
git clone https://github.com/submariner-io/submariner-operator
cd submariner-operator
make clusters
Note: remain in submariner-operator directory for the rest of the steps in this section
curl -Ls https://get.submariner.io | VERSION=v0.14.6 bash
- Set up the Multicluster Service API and Submariner for cross-cluster traffic using ServiceImport:
subctl deploy-broker --kubeconfig output/kubeconfigs/kind-config-cluster1 --globalnet
subctl join --kubeconfig output/kubeconfigs/kind-config-cluster1 broker-info.subm --clusterid cluster1 --natt=false
subctl join --kubeconfig output/kubeconfigs/kind-config-cluster2 broker-info.subm --clusterid cluster2 --natt=false
Once the above steps are done and all the pods are up in both clusters, we are ready to install Envoy Gateway.
Install Envoy Gateway
Install the Gateway API CRDs and Envoy Gateway in cluster1:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.6.0 -n envoy-gateway-system --create-namespace --kubeconfig output/kubeconfigs/kind-config-cluster1
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available --kubeconfig output/kubeconfigs/kind-config-cluster1
Install Application
Install the backend application in cluster2 and export it through subctl command.
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/application.yaml --kubeconfig output/kubeconfigs/kind-config-cluster2
subctl export service backend --namespace default --kubeconfig output/kubeconfigs/kind-config-cluster2
Create Gateway API Objects
Create the Gateway API objects GatewayClass, Gateway and HTTPRoute in cluster1 to set up the routing.
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/multicluster-service.yaml --kubeconfig output/kubeconfigs/kind-config-cluster1
Testing the Configuration
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8888:80 &
Curl the example app through Envoy proxy:
curl --verbose --header "Host: www.example.com" http://localhost:8888/get
2.19 - Proxy Observability
Envoy Gateway provides observability for the ControlPlane and the underlying EnvoyProxy instances.
This guide shows you how to configure proxy observability, including metrics, logs, and traces.
Prerequisites
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
FluentBit is used to collect logs from the EnvoyProxy instances and forward them to Loki. Install FluentBit:
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm upgrade --install fluent-bit fluent/fluent-bit -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/fluent-bit/helm-values.yaml -n monitoring --create-namespace --version 0.30.4
Loki is used to store logs. Install Loki:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/loki/loki.yaml -n monitoring
Tempo is used to store traces. Install Tempo:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install tempo grafana/tempo -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/tempo/helm-values.yaml -n monitoring --create-namespace --version 1.3.1
OpenTelemetry Collector offers a vendor-agnostic implementation of how to receive, process and export telemetry data.
Install OTel-Collector:
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
helm upgrade --install otel-collector open-telemetry/opentelemetry-collector -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/otel-collector/helm-values.yaml -n monitoring --create-namespace --version 0.60.0
Expose endpoints:
LOKI_IP=$(kubectl get svc loki -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
TEMPO_IP=$(kubectl get svc tempo -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
Metrics
By default, Envoy Gateway exposes metrics with a Prometheus endpoint.
Verify metrics:
export ENVOY_POD_NAME=$(kubectl get pod -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward pod/$ENVOY_POD_NAME -n envoy-gateway-system 19001:19001
# check metrics
curl localhost:19001/stats/prometheus | grep "default/backend/rule/0/match/0-www"
You can disable metrics by setting the telemetry.metrics.prometheus.disable field to true in the EnvoyProxy CRD.
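The manifest applied below sets that field. As a rough sketch (the metadata values here are assumptions; the resource is attached to your GatewayClass via parametersRef as described in the EnvoyProxy API reference), the relevant portion looks like:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config
  namespace: envoy-gateway-system
spec:
  telemetry:
    metrics:
      prometheus:
        disable: true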
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/metric/disable-prometheus.yaml
Envoy Gateway can send metrics to OpenTelemetry Sink.
Send metrics to OTel-Collector:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/metric/otel-sink.yaml
Verify OTel-Collector metrics:
export OTEL_POD_NAME=$(kubectl get pod -n monitoring --selector=app.kubernetes.io/name=opentelemetry-collector -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward pod/$OTEL_POD_NAME -n monitoring 19001:19001
# check metrics
curl localhost:19001/metrics | grep "default/backend/rule/0/match/0-www"
Logs
By default, Envoy Gateway sends logs to stdout in the default text format.
Verify logs from loki:
curl -s "http://$LOKI_IP:3100/loki/api/v1/query_range" --data-urlencode "query={job=\"fluentbit\"}" | jq '.data.result[0].values'
If you want to disable access logs, set the telemetry.accesslog.disable field to true in the EnvoyProxy CRD.
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/accesslog/disable-accesslog.yaml
Envoy Gateway can send logs to OpenTelemetry Sink.
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/accesslog/otel-accesslog.yaml
Verify logs from loki:
curl -s "http://$LOKI_IP:3100/loki/api/v1/query_range" --data-urlencode "query={exporter=\"OTLP\"}" | jq '.data.result[0].values'
Traces
By default, Envoy Gateway doesn’t send traces to OpenTelemetry Sink.
You can enable traces by setting the telemetry.tracing field in the EnvoyProxy CRD.
Note: Envoy Gateway uses a 100% sample rate, which means all requests will be traced. This may cause performance issues.
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/tracing/default.yaml
Verify traces from tempo:
curl -s "http://$TEMPO_IP:3100/api/search" --data-urlencode "q={ component=envoy }" | jq .traces
curl -s "http://$TEMPO_IP:3100/api/traces/<trace_id>" | jq
2.20 - Rate Limit
Rate limit is a feature that allows the user to limit the number of incoming requests to a predefined value based on attributes within the traffic flow.
Here are some reasons why you may want to implement rate limits:
- To prevent malicious activity such as DDoS attacks.
- To prevent applications and their resources (such as a database) from getting overloaded.
- To create API limits based on user entitlements.
Envoy Gateway supports Global rate limiting, where the rate limit is common across all instances of Envoy proxies where it is applied, i.e. if the data plane has 2 replicas of Envoy running and the rate limit is 10 requests/second, this limit is shared and will be hit if 5 requests pass through the first replica and 5 requests pass through the second replica within the same second.
Envoy Gateway introduces a new CRD called BackendTrafficPolicy that allows the user to describe their rate limit intent. This instantiated resource
can be linked to a Gateway, HTTPRoute or GRPCRoute resource.
Prerequisites
Install Envoy Gateway
- Follow the steps from the Quickstart Guide to install Envoy Gateway and the HTTPRoute example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Install Redis
- The global rate limit feature is based on Envoy Ratelimit, which requires a Redis instance as its caching layer. Let's install a Redis deployment in the redis-system namespace.
cat <<EOF | kubectl apply -f -
kind: Namespace
apiVersion: v1
metadata:
name: redis-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: redis-system
labels:
app: redis
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- image: redis:6.0.6
imagePullPolicy: IfNotPresent
name: redis
resources:
limits:
cpu: 1500m
memory: 512Mi
requests:
cpu: 200m
memory: 256Mi
---
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: redis-system
labels:
app: redis
annotations:
spec:
ports:
- name: redis
port: 6379
protocol: TCP
targetPort: 6379
selector:
app: redis
---
EOF
Enable Global Rate limit in Envoy Gateway
- The default installation of Envoy Gateway installs a default EnvoyGateway configuration and attaches it using a ConfigMap. In the next step, we will update this resource to enable rate limiting in Envoy Gateway, as well as configure the URL of the Redis instance used for Global rate limiting.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: envoy-gateway-config
namespace: envoy-gateway-system
data:
envoy-gateway.yaml: |
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
provider:
type: Kubernetes
gateway:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
rateLimit:
backend:
type: Redis
redis:
url: redis.redis-system.svc.cluster.local:6379
EOF
- After updating the ConfigMap, you will need to restart the envoy-gateway deployment so the configuration kicks in:
kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
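Before applying rate limit policies, you can confirm that the restart finished and the new configuration was picked up (a simple hedged check):
kubectl rollout status deployment/envoy-gateway -n envoy-gateway-system
kubectl get deployment -n envoy-gateway-system
After the restart, a rate limit service deployment should also appear in the envoy-gateway-system namespace; its exact name depends on the Envoy Gateway version.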
Rate Limit Specific User
Here is an example of a rate limit implemented by the application developer to limit a specific user by matching on a custom x-user-id header with a value set to one.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
namespace: default
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- name: x-user-id
value: one
limit:
requests: 3
unit: Hour
EOF
HTTPRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
The HTTPRoute status should indicate that it has been accepted and is bound to the example Gateway.
kubectl get httproute/http-ratelimit -o yaml
Get the Gateway’s address:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Let's query ratelimit.example/get 4 times. We should receive a 200 response from the example Gateway for the first 3 requests, and then a 429 status code for the 4th request, since the limit is set at 3 requests/Hour for requests containing the header x-user-id with the value one.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
You should be able to send requests with the x-user-id
header and a different value and receive successful responses from the server.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: two" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:36 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:37 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:38 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:34:39 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
Rate Limit Distinct Users
Here is an example of a rate limit implemented by the application developer to limit distinct users, who can be differentiated based on the value in the x-user-id header. Here, user one (recognised from the traffic flow using the header x-user-id and value one) will be rate limited at 3 requests/hour, and so will user two (recognised from the traffic flow using the header x-user-id and value two).
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
namespace: default
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- type: Distinct
name: x-user-id
limit:
requests: 3
unit: Hour
EOF
HTTPRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
Let's run the same command again with the header x-user-id and value one set in the request. We should see the first 3 requests succeeding and the 4th request being rate limited.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: one" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
You should see the same behavior when the value for the header x-user-id is set to two and 4 requests are sent.
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "x-user-id: two" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
Rate Limit All Requests
This example shows you how to rate limit all requests matching the HTTPRoute rule at 3 requests/hour by leaving the clientSelectors field unset.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
namespace: default
rateLimit:
type: Global
global:
rules:
- limit:
requests: 3
unit: Hour
EOF
HTTPRoute
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
for i in {1..4}; do curl -I --header "Host: ratelimit.example" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:31 GMT
content-length: 460
x-envoy-upstream-service-time: 4
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:32 GMT
content-length: 460
x-envoy-upstream-service-time: 2
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Wed, 08 Feb 2023 02:33:33 GMT
content-length: 460
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Wed, 08 Feb 2023 02:33:34 GMT
server: envoy
transfer-encoding: chunked
Rate Limit Client IP Addresses
Here is an example of a rate limit implemented by the application developer to limit distinct users who can be differentiated based on their IP address (also reflected in the X-Forwarded-For header).
Note: Envoy Gateway supports two kinds of rate limit for the IP address: exact and distinct (a sketch of the exact variant follows this note; the walkthrough below uses distinct).
- exact means that all IP addresses within the specified Source IP CIDR share the same rate limit bucket.
- distinct means that each IP address within the specified Source IP CIDR has its own rate limit bucket.
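For contrast, here is a minimal, hypothetical sketch of an exact rule (the CIDR value is illustrative and the lowercase type spelling mirrors the distinct example below; adjust both to your environment and API version):

rules:
- clientSelectors:
  - sourceCIDR:
      # every client inside this range shares a single bucket of 3 requests/hour
      value: 192.168.0.0/16
      type: exact
  limit:
    requests: 3
    unit: Hour

The walkthrough itself applies a distinct rule, so each client IP gets its own bucket: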
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: http-ratelimit
namespace: default
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- sourceCIDR:
value: 0.0.0.0/0
type: distinct
limit:
requests: 3
unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: http-ratelimit
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- matches:
- path:
type: PathPrefix
value: /
backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
EOF
for i in {1..4}; do curl -I --header "Host: ratelimit.example" http://${GATEWAY_HOST}/get ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:45 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:46 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Tue, 28 Mar 2023 08:28:48 GMT
content-length: 512
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Tue, 28 Mar 2023 08:28:48 GMT
server: envoy
transfer-encoding: chunked
Rate Limit JWT Claims
Here is an example of a rate limit implemented by the application developer to limit distinct users, who can be differentiated based on the value of the JWT claims they carry.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
name: jwt-example
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
jwt:
providers:
- name: example
remoteJWKS:
uri: https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/jwks.json
claimToHeaders:
- claim: name
header: x-claim-name
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
name: policy-httproute
spec:
targetRef:
group: gateway.networking.k8s.io
kind: HTTPRoute
name: example
rateLimit:
type: Global
global:
rules:
- clientSelectors:
- headers:
- name: x-claim-name
value: John Doe
limit:
requests: 3
unit: Hour
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: example
spec:
parentRefs:
- name: eg
hostnames:
- ratelimit.example
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /foo
EOF
Get the JWT used for testing request authentication:
TOKEN=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/test.jwt -s) && echo "$TOKEN" | cut -d '.' -f2 - | base64 --decode
TOKEN1=$(curl https://raw.githubusercontent.com/envoyproxy/gateway/main/examples/kubernetes/jwt/with-different-claim.jwt -s) && echo "$TOKEN1" | cut -d '.' -f2 - | base64 --decode
Requests carrying TOKEN are rate limited:
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "Authorization: Bearer $TOKEN" http://${GATEWAY_HOST}/foo ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:25 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:26 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:00:27 GMT
content-length: 561
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 429 Too Many Requests
x-envoy-ratelimited: true
date: Mon, 12 Jun 2023 12:00:28 GMT
server: envoy
transfer-encoding: chunked
Requests carrying TOKEN1 are not rate limited:
for i in {1..4}; do curl -I --header "Host: ratelimit.example" --header "Authorization: Bearer $TOKEN1" http://${GATEWAY_HOST}/foo ; sleep 1; done
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:34 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:35 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:36 GMT
content-length: 556
x-envoy-upstream-service-time: 1
server: envoy
HTTP/1.1 200 OK
content-type: application/json
x-content-type-options: nosniff
date: Mon, 12 Jun 2023 12:02:37 GMT
content-length: 556
x-envoy-upstream-service-time: 0
server: envoy
(Optional) Editing Kubernetes Resources settings for the Rate Limit Service
The default installation of Envoy Gateway installs a default EnvoyGateway configuration and provides initial Kubernetes resource settings for the rate limit service: replicas is 1, and the requested resources are 100m CPU and 512Mi memory. Other settings, such as the container image, securityContext, env, pod annotations, and the pod securityContext, can be changed by modifying the ConfigMap.
tls.certificateRef sets the client certificate used for TLS connections to the Redis server.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
name: envoy-gateway-config
namespace: envoy-gateway-system
data:
envoy-gateway.yaml: |
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
provider:
type: Kubernetes
kubernetes:
rateLimitDeployment:
replicas: 1
container:
image: envoyproxy/ratelimit:master
env:
- name: CACHE_KEY_PREFIX
value: "eg:rl:"
resources:
requests:
cpu: 100m
memory: 512Mi
securityContext:
runAsUser: 2000
allowPrivilegeEscalation: false
pod:
annotations:
key1: val1
key2: val2
securityContext:
runAsUser: 1000
runAsGroup: 3000
fsGroup: 2000
fsGroupChangePolicy: "OnRootMismatch"
gateway:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
rateLimit:
backend:
type: Redis
redis:
url: redis.redis-system.svc.cluster.local:6379
tls:
certificateRef:
name: ratelimit-cert
EOF
- After updating the ConfigMap, you will need to restart the envoy-gateway deployment so the new configuration takes effect:
kubectl rollout restart deployment envoy-gateway -n envoy-gateway-system
2.21 - Secure Gateways
This guide will help you get started using secure Gateways. The guide uses a self-signed CA, so it should be used for
testing and demonstration purposes only.
Prerequisites
- OpenSSL to generate TLS assets.
Installation
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
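For example, a quick sanity check (assuming GATEWAY_HOST is exported as in the Quickstart Guide) could look like:

curl --verbose --header "Host: www.example.com" http://${GATEWAY_HOST}/get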
TLS Certificates
Generate the certificates and keys used by the Gateway to terminate client TLS connections.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
Create a certificate and a private key for www.example.com:
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
Store the cert/key in a Secret:
kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt
Update the Gateway from the Quickstart guide to include an HTTPS listener that listens on port 443 and references the example-cert Secret:
kubectl patch gateway eg --type=json --patch '[{
"op": "add",
"path": "/spec/listeners/-",
"value": {
"name": "https",
"protocol": "HTTPS",
"port": 443,
"tls": {
"mode": "Terminate",
"certificateRefs": [{
"kind": "Secret",
"group": "",
"name": "example-cert",
}],
},
},
}]'
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Testing
Clusters without External LoadBalancer Support
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &
Query the example app through Envoy proxy:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get
Clusters with External LoadBalancer Support
Get the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Query the example app through the Gateway:
curl -v -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cacert example.com.crt https://www.example.com/get
Multiple HTTPS Listeners
Create a TLS cert/key for the additional HTTPS listener:
openssl req -out foo.example.com.csr -newkey rsa:2048 -nodes -keyout foo.example.com.key -subj "/CN=foo.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in foo.example.com.csr -out foo.example.com.crt
Store the cert/key in a Secret:
kubectl create secret tls foo-cert --key=foo.example.com.key --cert=foo.example.com.crt
Create another HTTPS listener on the example Gateway:
kubectl patch gateway eg --type=json --patch '[{
"op": "add",
"path": "/spec/listeners/-",
"value": {
"name": "https-foo",
"protocol": "HTTPS",
"port": 443,
"hostname": "foo.example.com",
"tls": {
"mode": "Terminate",
"certificateRefs": [{
"kind": "Secret",
"group": "",
"name": "foo-cert",
}],
},
},
}]'
Update the HTTPRoute to route traffic for hostname foo.example.com to the example backend service:
kubectl patch httproute backend --type=json --patch '[{
"op": "add",
"path": "/spec/hostnames/-",
"value": "foo.example.com",
}]'
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Follow the steps in the Testing section to test connectivity to the backend app through both Gateway listeners. Replace www.example.com with foo.example.com to test the new HTTPS listener.
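For example, when port-forwarding as in the Testing section above, the request for the new listener might look like this (both server certificates were signed by the same example.com CA, so the same --cacert works):

curl -v -HHost:foo.example.com --resolve "foo.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://foo.example.com:8443/get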
Cross Namespace Certificate References
A Gateway can be configured to reference a certificate in a different namespace. This is allowed by a ReferenceGrant
created in the target namespace. Without the ReferenceGrant, a cross-namespace reference is invalid.
Before proceeding, ensure you can query the HTTPS backend service from the Testing section.
To demonstrate cross namespace certificate references, create a ReferenceGrant that allows Gateways from the “default”
namespace to reference Secrets in the “envoy-gateway-system” namespace:
$ cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: ReferenceGrant
metadata:
name: example
namespace: envoy-gateway-system
spec:
from:
- group: gateway.networking.k8s.io
kind: Gateway
namespace: default
to:
- group: ""
kind: Secret
EOF
Delete the previously created Secret:
kubectl delete secret/example-cert
The Gateway HTTPS listener should now surface the Ready: False status condition, and the example HTTPS backend should no longer be reachable through the Gateway.
kubectl get gateway/eg -o yaml
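If you only want the HTTPS listener's conditions rather than the full YAML, a jsonpath query such as the following (field names follow the Gateway API status; adjust the listener name if yours differs) can be handy:

kubectl get gateway/eg -o jsonpath='{.status.listeners[?(@.name=="https")].conditions}'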
Recreate the example Secret in the envoy-gateway-system namespace:
kubectl create secret tls example-cert -n envoy-gateway-system --key=www.example.com.key --cert=www.example.com.crt
Update the Gateway HTTPS listener with namespace: envoy-gateway-system, for example:
$ cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
spec:
gatewayClassName: eg
listeners:
- name: http
protocol: HTTP
port: 80
- name: https
protocol: HTTPS
port: 443
tls:
mode: Terminate
certificateRefs:
- kind: Secret
group: ""
name: example-cert
namespace: envoy-gateway-system
EOF
The Gateway HTTPS listener status should now surface the Ready: True condition, and you should once again be able to query the HTTPS backend through the Gateway.
Lastly, test connectivity using the above Testing section.
Clean-Up
Follow the steps from the Quickstart Guide to uninstall Envoy Gateway and the example manifest.
Delete the Secrets:
kubectl delete secret/example-cert
kubectl delete secret/foo-cert
RSA + ECDSA Dual stack certificates
This section walks through generating RSA- and ECDSA-derived certificates and keys for the server, which can then be configured in the Gateway listener to terminate TLS traffic.
Prerequisites
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Follow the steps in the TLS Certificates section of the guide to generate a self-signed, RSA-derived server certificate and private key, and configure those in the Gateway listener configuration to terminate HTTPS traffic.
Pre-checks
While testing in a cluster without external LoadBalancer support, we can query the example app through the Envoy proxy while enforcing an RSA cipher, as shown below:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -Isv --ciphers ECDHE-RSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
Since the Secret configured at this point is an RSA-based Secret, if we enforce the usage of an ECDSA cipher, the call should fail as follows:
$ curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -Isv --ciphers ECDHE-ECDSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
* Added www.example.com:8443:127.0.0.1 to DNS cache
* Hostname www.example.com was found in DNS cache
* Trying 127.0.0.1:8443...
* Connected to www.example.com (127.0.0.1) port 8443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
* Cipher selection: ECDHE-ECDSA-CHACHA20-POLY1305
* CAfile: example.com.crt
* CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* error:1404B410:SSL routines:ST_CONNECT:sslv3 alert handshake failure
* Closing connection 0
In the rest of this section, we will configure the existing Gateway listener to accept both kinds of ciphers.
TLS Certificates
Reuse the CA certificate and key pair generated in the Secure Gateways guide and use this CA to sign both RSA and ECDSA Server certificates.
Note the CA certificate and key names are example.com.crt and example.com.key respectively.
Create an ECDSA certificate and a private key for www.example.com:
openssl ecparam -noout -genkey -name prime256v1 -out www.example.com.ecdsa.key
openssl req -new -SHA384 -key www.example.com.ecdsa.key -nodes -out www.example.com.ecdsa.csr -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -SHA384 -days 365 -in www.example.com.ecdsa.csr -CA example.com.crt -CAkey example.com.key -CAcreateserial -out www.example.com.ecdsa.crt
Store the cert/key in a Secret:
kubectl create secret tls example-cert-ecdsa --key=www.example.com.ecdsa.key --cert=www.example.com.ecdsa.crt
Patch the Gateway with this additional ECDSA Secret:
kubectl patch gateway eg --type=json --patch '[{
"op": "add",
"path": "/spec/listeners/1/tls/certificateRefs/-",
"value": {
"name": "example-cert-ecdsa",
},
}]'
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Testing
Again, while testing in a cluster without external LoadBalancer support, we can query the example app through the Envoy proxy while enforcing an RSA cipher, which should work as it did before:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -Isv --ciphers ECDHE-RSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
...
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
...
Additionally, querying the example app while enforcing an ECDSA cipher should also work now:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -Isv --ciphers ECDHE-ECDSA-CHACHA20-POLY1305 --tlsv1.2 --tls-max 1.2
...
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-CHACHA20-POLY1305
...
SNI based Certificate selection
This section gives a walkthrough to generate multiple certificates corresponding to different FQDNs. The same Gateway listener can then be configured to terminate TLS traffic for multiple FQDNs based on SNI matching.
Prerequisites
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Follow the steps in the TLS Certificates section of the guide to generate a self-signed, RSA-derived server certificate and private key, and configure those in the Gateway listener configuration to terminate HTTPS traffic.
Additional Configurations
Using the TLS Certificates section in the guide, we first generate an additional Secret for another host, www.sample.com.
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=sample Inc./CN=sample.com' -keyout sample.com.key -out sample.com.crt
openssl req -out www.sample.com.csr -newkey rsa:2048 -nodes -keyout www.sample.com.key -subj "/CN=www.sample.com/O=sample organization"
openssl x509 -req -days 365 -CA sample.com.crt -CAkey sample.com.key -set_serial 0 -in www.sample.com.csr -out www.sample.com.crt
kubectl create secret tls sample-cert --key=www.sample.com.key --cert=www.sample.com.crt
Note that all occurrences of example.com were simply replaced with sample.com.
Next we update the Gateway configuration to accommodate the new certificate, which will be used to terminate TLS traffic:
kubectl patch gateway eg --type=json --patch '[{
"op": "add",
"path": "/spec/listeners/1/tls/certificateRefs/-",
"value": {
"name": "sample-cert",
},
}]'
Finally, we update the HTTPRoute to route traffic for hostname www.sample.com to the example backend service:
kubectl patch httproute backend --type=json --patch '[{
"op": "add",
"path": "/spec/hostnames/-",
"value": "www.sample.com",
}]'
Testing
Clusters without External LoadBalancer Support
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &
Query the example app through Envoy proxy:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get -I
Similarly, query the sample app through the same Envoy proxy:
curl -v -HHost:www.sample.com --resolve "www.sample.com:8443:127.0.0.1" \
--cacert sample.com.crt https://www.sample.com:8443/get -I
Since multiple certificates are configured on the same Gateway listener, Envoy is able to provide the client with the appropriate certificate based on the SNI in the client request.
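If you want to confirm which certificate is served for each SNI, an openssl probe against the port-forwarded listener (a convenience check, not part of the original walkthrough) shows the subject of the returned certificate:

openssl s_client -connect 127.0.0.1:8443 -servername www.example.com </dev/null 2>/dev/null | openssl x509 -noout -subject
openssl s_client -connect 127.0.0.1:8443 -servername www.sample.com </dev/null 2>/dev/null | openssl x509 -noout -subject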
Clusters with External LoadBalancer Support
Refer to the steps mentioned earlier in the guide under Testing in Clusters with External LoadBalancer Support.
Next Steps
Check out the Developer Guide to get involved in the project.
2.22 - TCP Routing
TCPRoute provides a way to route TCP requests. When combined with a Gateway listener, it can be used to forward connections on the port specified by the listener to a set of backends specified by the TCPRoute. To learn more about TCP routing, refer to the Gateway API documentation.
Installation
Install Envoy Gateway:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.6.0 -n envoy-gateway-system --create-namespace
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
Configuration
In this example, we have one Gateway resource and two TCPRoute resources that distribute the traffic with the following
rules:
- All TCP streams on port 8088 of the Gateway are forwarded to port 3001 of the foo Kubernetes Service.
- All TCP streams on port 8089 of the Gateway are forwarded to port 3002 of the bar Kubernetes Service.
In this example, two TCP listeners are applied to the Gateway in order to route traffic to two separate backend TCPRoutes; note that the protocol set for the listeners on the Gateway is TCP.
Install the GatewayClass and a tcp-gateway Gateway first:
cat <<EOF | kubectl apply -f -
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: tcp-gateway
spec:
gatewayClassName: eg
listeners:
- name: foo
protocol: TCP
port: 8088
allowedRoutes:
kinds:
- kind: TCPRoute
- name: bar
protocol: TCP
port: 8089
allowedRoutes:
kinds:
- kind: TCPRoute
EOF
Install two Services, foo and bar, which are bound to backend-1 and backend-2.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
name: foo
labels:
app: backend-1
spec:
ports:
- name: http
port: 3001
targetPort: 3000
selector:
app: backend-1
---
apiVersion: v1
kind: Service
metadata:
name: bar
labels:
app: backend-2
spec:
ports:
- name: http
port: 3002
targetPort: 3000
selector:
app: backend-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-1
spec:
replicas: 1
selector:
matchLabels:
app: backend-1
version: v1
template:
metadata:
labels:
app: backend-1
version: v1
spec:
containers:
- image: gcr.io/k8s-staging-ingressconformance/echoserver:v20221109-7ee2f3e
imagePullPolicy: IfNotPresent
name: backend-1
ports:
- containerPort: 3000
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SERVICE_NAME
value: foo
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-2
spec:
replicas: 1
selector:
matchLabels:
app: backend-2
version: v1
template:
metadata:
labels:
app: backend-2
version: v1
spec:
containers:
- image: gcr.io/k8s-staging-ingressconformance/echoserver:v20221109-7ee2f3e
imagePullPolicy: IfNotPresent
name: backend-2
ports:
- containerPort: 3000
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SERVICE_NAME
value: bar
EOF
Install two TCPRoutes, tcp-app-1 and tcp-app-2, with different sectionName values:
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
name: tcp-app-1
spec:
parentRefs:
- name: tcp-gateway
sectionName: foo
rules:
- backendRefs:
- name: foo
port: 3001
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
name: tcp-app-2
spec:
parentRefs:
- name: tcp-gateway
sectionName: bar
rules:
- backendRefs:
- name: bar
port: 3002
EOF
In the above example we separate the traffic for the two separate backend TCP Services by using the sectionName field in
the parentRefs:
spec:
parentRefs:
- name: tcp-gateway
sectionName: foo
This corresponds directly with the name in the listeners in the Gateway:
listeners:
- name: foo
protocol: TCP
port: 8088
- name: bar
protocol: TCP
port: 8089
In this way each TCPRoute “attaches” itself to a different port on the Gateway, so that the foo service takes traffic for port 8088 from outside the cluster and the bar service takes the port 8089 traffic.
Before testing, please get the tcp-gateway Gateway’s address first:
export GATEWAY_HOST=$(kubectl get gateway/tcp-gateway -o jsonpath='{.status.addresses[0].value}')
You can use nc to test the TCP connections to Envoy Gateway on the different ports and see that they succeed:
nc -zv ${GATEWAY_HOST} 8088
nc -zv ${GATEWAY_HOST} 8089
You can also send requests through Envoy Gateway and get responses as shown below:
curl -i "http://${GATEWAY_HOST}:8088"
HTTP/1.1 200 OK
Content-Type: application/json
X-Content-Type-Options: nosniff
Date: Tue, 03 Jan 2023 10:18:36 GMT
Content-Length: 267
{
"path": "/",
"host": "xxx.xxx.xxx.xxx:8088",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.85.0"
]
},
"namespace": "default",
"ingress": "",
"service": "foo",
"pod": "backend-1-c6c5fb958-dl8vl"
}
You can see that the traffic is routed to the foo service when sending requests to port 8088.
curl -i "http://${GATEWAY_HOST}:8089"
HTTP/1.1 200 OK
Content-Type: application/json
X-Content-Type-Options: nosniff
Date: Tue, 03 Jan 2023 10:19:28 GMT
Content-Length: 267
{
"path": "/",
"host": "xxx.xxx.xxx.xxx:8089",
"method": "GET",
"proto": "HTTP/1.1",
"headers": {
"Accept": [
"*/*"
],
"User-Agent": [
"curl/7.85.0"
]
},
"namespace": "default",
"ingress": "",
"service": "bar",
"pod": "backend-2-98fcff498-hcmgb"
}
You can see that the traffic is routed to the bar service when sending requests to port 8089.
2.23 - TLS Passthrough
This guide will walk through the steps required to configure TLS Passthrough via Envoy Gateway. Unlike configuring
Secure Gateways, where the Gateway terminates the client TLS connection, TLS Passthrough allows the application itself
to terminate the TLS connection, while the Gateway routes the requests to the application based on SNI headers.
Prerequisites
- OpenSSL to generate TLS assets.
Installation
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
TLS Certificates
Generate the certificates and keys used by the Service to terminate client TLS connections.
For the application, we’ll deploy a sample echoserver app, with the certificates loaded in the application Pod.
Note: These certificates will not be used by the Gateway, but will remain in the application scope.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
Create a certificate and a private key for passthrough.example.com:
openssl req -out passthrough.example.com.csr -newkey rsa:2048 -nodes -keyout passthrough.example.com.key -subj "/CN=passthrough.example.com/O=some organization"
openssl x509 -req -sha256 -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in passthrough.example.com.csr -out passthrough.example.com.crt
Store the cert/key in a Secret:
kubectl create secret tls server-certs --key=passthrough.example.com.key --cert=passthrough.example.com.crt
Deployment
Deploy TLS Passthrough application Deployment, Service and TLSRoute:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/tls-passthrough.yaml
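For reference, the manifest wires the passthrough hostname to the backend with a TLSRoute roughly like the sketch below; the resource and backend names here are placeholders, so consult the applied manifest for the exact values:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: passthrough-app              # placeholder name
spec:
  parentRefs:
  - name: eg
    sectionName: tls                 # matches the listener added in the next step
  hostnames:
  - passthrough.example.com
  rules:
  - backendRefs:
    - name: passthrough-echoserver   # placeholder Service name
      port: 443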
Patch the Gateway from the Quickstart guide to include a TLS listener that listens on port 6443 and is configured for TLS mode Passthrough:
kubectl patch gateway eg --type=json --patch '[{
"op": "add",
"path": "/spec/listeners/-",
"value": {
"name": "tls",
"protocol": "TLS",
"hostname": "passthrough.example.com",
"tls": {"mode": "Passthrough"},
"port": 6443,
},
}]'
Testing
Clusters without External LoadBalancer Support
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 6043:6443 &
Curl the example app through Envoy proxy:
curl -v --resolve "passthrough.example.com:6043:127.0.0.1" https://passthrough.example.com:6043 \
--cacert passthrough.example.com.crt
Clusters with External LoadBalancer Support
You can also test the same functionality by sending traffic to the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Curl the example app through the Gateway, e.g. Envoy proxy:
curl -v -HHost:passthrough.example.com --resolve "passthrough.example.com:6443:${GATEWAY_HOST}" \
--cacert example.com.crt https://passthrough.example.com:6443/get
Clean-Up
Follow the steps from the Quickstart Guide to uninstall Envoy Gateway and the example manifest.
Delete the Secret:
kubectl delete secret/server-certs
Next Steps
Check out the Developer Guide to get involved in the project.
2.24 - TLS Termination for TCP
This guide will walk through the steps required to configure TLS Terminate mode for TCP traffic via Envoy Gateway. The guide uses a self-signed CA, so it should be used for testing and demonstration purposes only.
Prerequisites
- OpenSSL to generate TLS assets.
Installation
Follow the steps from the Quickstart Guide to install Envoy Gateway.
TLS Certificates
Generate the certificates and keys used by the Gateway to terminate client TLS connections.
Create a root certificate and private key to sign certificates:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -subj '/O=example Inc./CN=example.com' -keyout example.com.key -out example.com.crt
Create a certificate and a private key for www.example.com:
openssl req -out www.example.com.csr -newkey rsa:2048 -nodes -keyout www.example.com.key -subj "/CN=www.example.com/O=example organization"
openssl x509 -req -days 365 -CA example.com.crt -CAkey example.com.key -set_serial 0 -in www.example.com.csr -out www.example.com.crt
Store the cert/key in a Secret:
kubectl create secret tls example-cert --key=www.example.com.key --cert=www.example.com.crt
Install the TLS Termination for TCP example resources:
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/tls-termination.yaml
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Testing
Clusters without External LoadBalancer Support
Get the name of the Envoy service created by the example Gateway:
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward to the Envoy service:
kubectl -n envoy-gateway-system port-forward service/${ENVOY_SERVICE} 8443:443 &
Query the example app through Envoy proxy:
curl -v -HHost:www.example.com --resolve "www.example.com:8443:127.0.0.1" \
--cacert example.com.crt https://www.example.com:8443/get
Clusters with External LoadBalancer Support
Get the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Query the example app through the Gateway:
curl -v -HHost:www.example.com --resolve "www.example.com:443:${GATEWAY_HOST}" \
--cacert example.com.crt https://www.example.com/get
2.25 - UDP Routing
The UDPRoute resource allows users to configure UDP routing by matching UDP traffic and forwarding it to Kubernetes backends. This guide will use a CoreDNS example to walk you through the steps required to configure UDPRoute on Envoy Gateway.
Note: UDPRoute allows Envoy Gateway to operate as a non-transparent proxy between a UDP client and server. The lack
of transparency means that the upstream server will see the source IP and port of the Gateway instead of the client.
For additional information, refer to Envoy’s UDP proxy documentation.
Prerequisites
Install Envoy Gateway:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.6.0 -n envoy-gateway-system --create-namespace
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
Installation
Install CoreDNS in the Kubernetes cluster as the example backend. The installed CoreDNS is listening on
UDP port 53 for DNS lookups.
kubectl apply -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/kubernetes/udp-routing-example-backend.yaml
Wait for the CoreDNS deployment to become available:
kubectl wait --timeout=5m deployment/coredns --for=condition=Available
Update the Gateway from the Quickstart guide to include a UDP listener that listens on UDP port 5300:
kubectl patch gateway eg --type=json --patch '[{
"op": "add",
"path": "/spec/listeners/-",
"value": {
"name": "coredns",
"protocol": "UDP",
"port": 5300,
"allowedRoutes": {
"kinds": [{
"kind": "UDPRoute"
}]
}
},
}]'
Verify the Gateway status:
kubectl get gateway/eg -o yaml
Configuration
Create a UDPRoute resource to route UDP traffic received on Gateway port 5300 to the CoreDNS backend.
cat <<EOF | kubectl apply -f -
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: UDPRoute
metadata:
name: coredns
spec:
parentRefs:
- name: eg
sectionName: coredns
rules:
- backendRefs:
- name: coredns
port: 53
EOF
Verify the UDPRoute status:
kubectl get udproute/coredns -o yaml
Testing
Get the External IP of the Gateway:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
Use the dig command to query the DNS entry foo.bar.com through the Gateway.
dig @${GATEWAY_HOST} -p 5300 foo.bar.com
You should see a result similar to the output below, which means that the DNS query has been successfully routed to the backend CoreDNS.
Note: 49.51.177.138 is the resolved address of GATEWAY_HOST.
; <<>> DiG 9.18.1-1ubuntu1.1-Ubuntu <<>> @49.51.177.138 -p 5300 foo.bar.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 58125
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 3
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: 24fb86eba96ebf62 (echoed)
;; QUESTION SECTION:
;foo.bar.com. IN A
;; ADDITIONAL SECTION:
foo.bar.com. 0 IN A 10.244.0.19
_udp.foo.bar.com. 0 IN SRV 0 0 42376 .
;; Query time: 1 msec
;; SERVER: 49.51.177.138#5300(49.51.177.138) (UDP)
;; WHEN: Fri Jan 13 10:20:34 UTC 2023
;; MSG SIZE rcvd: 114
Clean-Up
Follow the steps from the Quickstart Guide to uninstall Envoy Gateway.
Delete the CoreDNS example manifest and the UDPRoute:
kubectl delete deploy/coredns
kubectl delete service/coredns
kubectl delete cm/coredns
kubectl delete udproute/coredns
Next Steps
Check out the Developer Guide to get involved in the project.
2.26 - Use egctl
egctl is a command line tool that provides additional functionality for Envoy Gateway users.
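A quick way to confirm the binary is installed and on your PATH (assuming a recent egctl build) is to print its version information:

egctl version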
egctl experimental translate
This subcommand allows users to translate from an input configuration type to an output configuration type.
In the below example, we will translate the Kubernetes resources (including the Gateway API resources) into xDS
resources.
cat <<EOF | egctl x translate --from gateway-api --to xds -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
namespace: default
spec:
gatewayClassName: eg
listeners:
- name: http
protocol: HTTP
port: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: default
---
apiVersion: v1
kind: Service
metadata:
name: backend
namespace: default
labels:
app: backend
service: backend
spec:
clusterIP: "1.1.1.1"
type: ClusterIP
ports:
- name: http
port: 3000
targetPort: 3000
protocol: TCP
selector:
app: backend
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
EOF
configKey: default-eg
configs:
- '@type': type.googleapis.com/envoy.admin.v3.BootstrapConfigDump
bootstrap:
admin:
accessLog:
- name: envoy.access_loggers.file
typedConfig:
'@type': type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: /dev/null
address:
socketAddress:
address: 127.0.0.1
portValue: 19000
dynamicResources:
cdsConfig:
apiConfigSource:
apiType: DELTA_GRPC
grpcServices:
- envoyGrpc:
clusterName: xds_cluster
setNodeOnFirstMessageOnly: true
transportApiVersion: V3
resourceApiVersion: V3
ldsConfig:
apiConfigSource:
apiType: DELTA_GRPC
grpcServices:
- envoyGrpc:
clusterName: xds_cluster
setNodeOnFirstMessageOnly: true
transportApiVersion: V3
resourceApiVersion: V3
layeredRuntime:
layers:
- name: runtime-0
rtdsLayer:
name: runtime-0
rtdsConfig:
apiConfigSource:
apiType: DELTA_GRPC
grpcServices:
- envoyGrpc:
clusterName: xds_cluster
transportApiVersion: V3
resourceApiVersion: V3
staticResources:
clusters:
- connectTimeout: 10s
loadAssignment:
clusterName: xds_cluster
endpoints:
- lbEndpoints:
- endpoint:
address:
socketAddress:
address: envoy-gateway
portValue: 18000
name: xds_cluster
transportSocket:
name: envoy.transport_sockets.tls
typedConfig:
'@type': type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
commonTlsContext:
tlsCertificateSdsSecretConfigs:
- name: xds_certificate
sdsConfig:
pathConfigSource:
path: /sds/xds-certificate.json
resourceApiVersion: V3
tlsParams:
tlsMaximumProtocolVersion: TLSv1_3
validationContextSdsSecretConfig:
name: xds_trusted_ca
sdsConfig:
pathConfigSource:
path: /sds/xds-trusted-ca.json
resourceApiVersion: V3
type: STRICT_DNS
typedExtensionProtocolOptions:
envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
'@type': type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
explicitHttpConfig:
http2ProtocolOptions: {}
- '@type': type.googleapis.com/envoy.admin.v3.ClustersConfigDump
dynamicActiveClusters:
- cluster:
'@type': type.googleapis.com/envoy.config.cluster.v3.Cluster
commonLbConfig:
localityWeightedLbConfig: {}
connectTimeout: 10s
dnsLookupFamily: V4_ONLY
loadAssignment:
clusterName: default-backend-rule-0-match-0-www.example.com
endpoints:
- lbEndpoints:
- endpoint:
address:
socketAddress:
address: 1.1.1.1
portValue: 3000
loadBalancingWeight: 1
loadBalancingWeight: 1
locality: {}
name: default-backend-rule-0-match-0-www.example.com
outlierDetection: {}
type: STATIC
- '@type': type.googleapis.com/envoy.admin.v3.ListenersConfigDump
dynamicListeners:
- activeState:
listener:
'@type': type.googleapis.com/envoy.config.listener.v3.Listener
accessLog:
- filter:
responseFlagFilter:
flags:
- NR
name: envoy.access_loggers.file
typedConfig:
'@type': type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: /dev/stdout
address:
socketAddress:
address: 0.0.0.0
portValue: 10080
defaultFilterChain:
filters:
- name: envoy.filters.network.http_connection_manager
typedConfig:
'@type': type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
accessLog:
- name: envoy.access_loggers.file
typedConfig:
'@type': type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: /dev/stdout
httpFilters:
- name: envoy.filters.http.router
typedConfig:
'@type': type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
rds:
configSource:
apiConfigSource:
apiType: DELTA_GRPC
grpcServices:
- envoyGrpc:
clusterName: xds_cluster
setNodeOnFirstMessageOnly: true
transportApiVersion: V3
resourceApiVersion: V3
routeConfigName: default-eg-http
statPrefix: http
upgradeConfigs:
- upgradeType: websocket
useRemoteAddress: true
name: default-eg-http
- '@type': type.googleapis.com/envoy.admin.v3.RoutesConfigDump
dynamicRouteConfigs:
- routeConfig:
'@type': type.googleapis.com/envoy.config.route.v3.RouteConfiguration
name: default-eg-http
virtualHosts:
- domains:
- www.example.com
name: default-eg-http-www.example.com
routes:
- match:
prefix: /
route:
cluster: default-backend-rule-0-match-0-www.example.com
resourceType: all
You can also use the --type/-t flag to retrieve specific output types. In the below example, we will translate the Kubernetes resources (including the Gateway API resources) into xDS route resources.
cat <<EOF | egctl x translate --from gateway-api --to xds -t route -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
namespace: default
spec:
gatewayClassName: eg
listeners:
- name: http
protocol: HTTP
port: 80
---
apiVersion: v1
kind: Namespace
metadata:
name: default
---
apiVersion: v1
kind: Service
metadata:
name: backend
namespace: default
labels:
app: backend
service: backend
spec:
clusterIP: "1.1.1.1"
type: ClusterIP
ports:
- name: http
port: 3000
targetPort: 3000
protocol: TCP
selector:
app: backend
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
EOF
'@type': type.googleapis.com/envoy.admin.v3.RoutesConfigDump
configKey: default-eg
dynamicRouteConfigs:
- routeConfig:
'@type': type.googleapis.com/envoy.config.route.v3.RouteConfiguration
name: default-eg-http
virtualHosts:
- domains:
- www.example.com
name: default-eg-http
routes:
- match:
prefix: /
route:
cluster: default-backend-rule-0-match-0-www.example.com
resourceType: route
Add Missing Resources
You can pass the --add-missing-resources flag to use dummy non-Gateway API resources instead of specifying them explicitly.
For example, this will provide a similar result to the above:
cat <<EOF | egctl x translate --add-missing-resources --from gateway-api --to gateway-api -t route -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
namespace: default
spec:
gatewayClassName: eg
listeners:
- name: http
protocol: HTTP
port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
namespace: default
spec:
parentRefs:
- name: eg
hostnames:
- "www.example.com"
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
EOF
You can see that the output contains an EnvoyProxy resource that can be used as a starting point to modify the xDS bootstrap resource for the managed Envoy Proxy fleet.
envoyProxy:
metadata:
creationTimestamp: null
name: default-envoy-proxy
namespace: envoy-gateway-system
spec:
bootstrap: |
admin:
access_log:
- name: envoy.access_loggers.file
typed_config:
"@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
path: /dev/null
address:
socket_address:
address: 127.0.0.1
port_value: 19000
dynamic_resources:
ads_config:
api_type: DELTA_GRPC
transport_api_version: V3
grpc_services:
- envoy_grpc:
cluster_name: xds_cluster
set_node_on_first_message_only: true
lds_config:
ads: {}
resource_api_version: V3
cds_config:
ads: {}
resource_api_version: V3
static_resources:
clusters:
- connect_timeout: 10s
load_assignment:
cluster_name: xds_cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: envoy-gateway
port_value: 18000
typed_extension_protocol_options:
"envoy.extensions.upstreams.http.v3.HttpProtocolOptions":
"@type": "type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions"
"explicit_http_config":
"http2_protocol_options": {}
name: xds_cluster
type: STRICT_DNS
transport_socket:
name: envoy.transport_sockets.tls
typed_config:
"@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
common_tls_context:
tls_params:
tls_maximum_protocol_version: TLSv1_3
tls_certificate_sds_secret_configs:
- name: xds_certificate
sds_config:
path_config_source:
path: "/sds/xds-certificate.json"
resource_api_version: V3
validation_context_sds_secret_config:
name: xds_trusted_ca
sds_config:
path_config_source:
path: "/sds/xds-trusted-ca.json"
resource_api_version: V3
layered_runtime:
layers:
- name: runtime-0
rtds_layer:
rtds_config:
ads: {}
resource_api_version: V3
name: runtime-0
logging: {}
status: {}
gatewayClass:
metadata:
creationTimestamp: null
name: eg
namespace: envoy-gateway-system
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parametersRef:
group: gateway.envoyproxy.io
kind: EnvoyProxy
name: default-envoy-proxy
namespace: envoy-gateway-system
status:
conditions:
- lastTransitionTime: "2023-04-19T20:30:46Z"
message: Valid GatewayClass
reason: Accepted
status: "True"
type: Accepted
gateways:
- metadata:
creationTimestamp: null
name: eg
namespace: default
spec:
gatewayClassName: eg
listeners:
- name: http
port: 80
protocol: HTTP
status:
listeners:
- attachedRoutes: 1
conditions:
- lastTransitionTime: "2023-04-19T20:30:46Z"
message: Sending translated listener configuration to the data plane
reason: Programmed
status: "True"
type: Programmed
- lastTransitionTime: "2023-04-19T20:30:46Z"
message: Listener has been successfully translated
reason: Accepted
status: "True"
type: Accepted
name: http
supportedKinds:
- group: gateway.networking.k8s.io
kind: HTTPRoute
- group: gateway.networking.k8s.io
kind: GRPCRoute
httpRoutes:
- metadata:
creationTimestamp: null
name: backend
namespace: default
spec:
hostnames:
- www.example.com
parentRefs:
- name: eg
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
status:
parents:
- conditions:
- lastTransitionTime: "2023-04-19T20:30:46Z"
message: Route is accepted
reason: Accepted
status: "True"
type: Accepted
- lastTransitionTime: "2023-04-19T20:30:46Z"
message: Resolved all the Object references for the Route
reason: ResolvedRefs
status: "True"
type: ResolvedRefs
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parentRef:
name: eg
Sometimes you might find that egctl doesn't produce the expected result. For example, the following input produces an empty route resource:
cat <<EOF | egctl x translate --from gateway-api --type route --to xds -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
spec:
gatewayClassName: eg
listeners:
- name: tls
protocol: TLS
port: 8443
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
spec:
parentRefs:
- name: eg
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
EOF
xds:
envoy-gateway-system-eg:
'@type': type.googleapis.com/envoy.admin.v3.RoutesConfigDump
Validating Gateway API Configuration
You can add an additional target gateway-api to show the processed Gateway API resources. For example, translating the above resources with the new argument shows that the HTTPRoute is rejected because there is no ready listener for it:
cat <<EOF | egctl x translate --from gateway-api --type route --to gateway-api,xds -f -
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: eg
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: eg
spec:
gatewayClassName: eg
listeners:
- name: tls
protocol: TLS
port: 8443
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: backend
spec:
parentRefs:
- name: eg
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
EOF
gatewayClass:
metadata:
creationTimestamp: null
name: eg
namespace: envoy-gateway-system
spec:
controllerName: gateway.envoyproxy.io/gatewayclass-controller
status:
conditions:
- lastTransitionTime: "2023-04-19T20:54:52Z"
message: Valid GatewayClass
reason: Accepted
status: "True"
type: Accepted
gateways:
- metadata:
creationTimestamp: null
name: eg
namespace: envoy-gateway-system
spec:
gatewayClassName: eg
listeners:
- name: tls
port: 8443
protocol: TLS
status:
listeners:
- attachedRoutes: 0
conditions:
- lastTransitionTime: "2023-04-19T20:54:52Z"
message: Listener must have TLS set when protocol is TLS.
reason: Invalid
status: "False"
type: Programmed
name: tls
supportedKinds:
- group: gateway.networking.k8s.io
kind: TLSRoute
httpRoutes:
- metadata:
creationTimestamp: null
name: backend
namespace: envoy-gateway-system
spec:
parentRefs:
- name: eg
rules:
- backendRefs:
- group: ""
kind: Service
name: backend
port: 3000
weight: 1
matches:
- path:
type: PathPrefix
value: /
status:
parents:
- conditions:
- lastTransitionTime: "2023-04-19T20:54:52Z"
message: There are no ready listeners for this parent ref
reason: NoReadyListeners
status: "False"
type: Accepted
- lastTransitionTime: "2023-04-19T20:54:52Z"
message: Service envoy-gateway-system/backend not found
reason: BackendNotFound
status: "False"
type: ResolvedRefs
controllerName: gateway.envoyproxy.io/gatewayclass-controller
parentRef:
name: eg
xds:
envoy-gateway-system-eg:
'@type': type.googleapis.com/envoy.admin.v3.RoutesConfigDump
2.27 - Using cert-manager For TLS Termination
This guide shows how to set up cert-manager to automatically create certificates and secrets for use by Envoy Gateway.
It will first show how to enable the self-sign issuer, which is useful to test that cert-manager and Envoy Gateway can talk to each other.
Then it shows how to use Let’s Encrypt’s staging environment.
Changing to the Let’s Encrypt production environment is straight-forward after that.
Prerequisites
- A Kubernetes cluster and a configured kubectl.
- The helm command.
- The curl command or similar for testing HTTPS requests.
- For the ACME HTTP-01 challenge to work:
  - your Gateway must be reachable on the public Internet.
  - the domain name you use (we use www.example.com) must point to the Gateway's external IP(s).
Installation
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Deploying cert-manager
This is a summary of cert-manager Installation with Helm.
Installing cert-manager is straight-forward, but currently (v1.12) requires setting a feature gate to enable the Gateway API support.
$ helm repo add jetstack https://charts.jetstack.io
$ helm upgrade --install --create-namespace --namespace cert-manager --set installCRDs=true --set featureGates=ExperimentalGatewayAPISupport=true cert-manager jetstack/cert-manager
You should now have cert-manager running with nothing to do:
$ kubectl wait --for=condition=Available deployment -n cert-manager --all
deployment.apps/cert-manager condition met
deployment.apps/cert-manager-cainjector condition met
deployment.apps/cert-manager-webhook condition met
$ kubectl get -n cert-manager deployment
NAME READY UP-TO-DATE AVAILABLE AGE
cert-manager 1/1 1 1 42m
cert-manager-cainjector 1/1 1 1 42m
cert-manager-webhook 1/1 1 1 42m
A Self-Signing Issuer
cert-manager can have any number of issuer configurations.
The simplest issuer type is SelfSigned.
It simply takes the certificate request and signs it with the private key it generates for the TLS Secret.
Self-signed certificates don't provide any help in establishing trust with clients. However, they are great for initial testing, due to their simplicity.
To install the self-signed issuer, run:
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: selfsigned
spec:
selfSigned: {}
EOF
Creating a TLS Gateway Listener
We now have to patch the example Gateway to reference cert-manager:
$ kubectl patch gateway/eg --patch '
metadata:
annotations:
cert-manager.io/cluster-issuer: selfsigned
cert-manager.io/common-name: "Hello World!"
spec:
listeners:
- name: https
protocol: HTTPS
hostname: www.example.com
port: 443
tls:
mode: Terminate
certificateRefs:
- kind: Secret
name: eg-https
' --type=merge
You could instead create a new Gateway serving HTTPS, if you’d prefer.
cert-manager doesn’t care, but we’ll keep it all together in this guide.
Nowadays, X.509 certificates don’t use the subject Common Name for hostname matching, so you can set it to whatever you want, or leave it empty.
The important parts here are
- the annotation referencing the “selfsigned” ClusterIssuer we created above,
- the hostname, which is required (but see #6051 for computing it based on attached HTTPRoutes), and
- the named Secret, which is what cert-manager will create for us.
The annotations are documented in Supported Annotations.
Patching the Gateway makes cert-manager create a self-signed certificate within a few seconds.
Eventually, the Gateway becomes Programmed again:
$ kubectl wait --for=condition=Programmed gateway/eg
gateway.gateway.networking.k8s.io/eg condition met
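You can also inspect the Certificate that cert-manager generated; its name typically matches the referenced Secret (eg-https in this guide), but list the resources first if you are unsure:

kubectl get certificate
kubectl describe certificate/eg-https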
Testing The Gateway
See Testing in Secure Gateways for the general idea.
Since we have a self-signed certificate, curl will by default reject it, requiring the -k flag:
$ curl -kv -HHost:www.example.com https://127.0.0.1/get
...
* Server certificate:
* subject: CN=Hello World!
...
< HTTP/2 200
...
How cert-manager and Envoy Gateway Interact
This explains cert-manager Concepts in an Envoy Gateway context.
In the interaction between the two, cert-manager does all the heavy lifting.
It subscribes to changes to Gateway resources (using the gateway-shim
component.)
For any Gateway it finds, it looks for any TLS listeners, and the associated tls.certificateRefs
.
Note that while Gateway API supports multiple refs here, Envoy Gateway only uses one.
cert-manager also looks at the hostname
of the listener to figure out which hosts the certificate is expected to cover.
More than one listener can use the same certificate Secret, which means cert-manager needs to find all listeners using the same Secret before deciding what to do.
If the certificateRef points to a valid certificate, given the hostnames found in the listeners, cert-manager has nothing to do.
If there is no valid certificate, or it is about to expire, cert-manager’s gateway-shim
creates a Certificate resource, or updates the existing one.
cert-manager then follows the Certificate Lifecycle.
To know how to issue the certificate, a ClusterIssuer is configured, and referenced through annotations on the Gateway resource, which you did above.
Once a matching ClusterIssuer is found, that plugin does what needs to be done to acquire a signed certificate.
In the case of the ACME protocol (used by Let’s Encrypt,) cert-manager can also use an HTTP Gateway to solve the HTTP-01 challenge type.
This is the other side of cert-manager’s Gateway API support:
the ACME issuer creates a temporary HTTPRoute, lets the ACME server(s) query it, and deletes it again.
cert-manager then updates the Secret that the Gateway’s listener points to in tls.certificateRefs
.
Envoy Gateway picks up that the Secret has changed, and reloads the corresponding Envoy Proxy Deployments with the new private key and certificate.
As you can imagine, cert-manager requires quite broad permissions to update Secrets in any namespace, so the security-minded reader may want to look at the RBAC resources the Helm chart creates.
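If you are curious what actually ended up in the Secret, you can decode the certificate with openssl. This is just a sketch, assuming the eg-https Secret name used above:
$ kubectl get secret eg-https -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer -dates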
Using the ACME Issuer With Let’s Encrypt and HTTP-01
We will start using the Let’s Encrypt staging environment, to spare their production environment.
Our Gateway already contains an HTTP listener, so we will use that for the HTTP-01 challenges.
$ CERT_MANAGER_CONTACT_EMAIL=$(git config user.email) # Or whatever...
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: "$CERT_MANAGER_CONTACT_EMAIL"
privateKeySecretRef:
name: letsencrypt-staging-account-key
solvers:
- http01:
gatewayHTTPRoute:
parentRefs:
- kind: Gateway
name: eg
namespace: default
EOF
The important parts are
- using spec.acme with a server URI and contact email address, and
- referencing our plain HTTP gateway so the challenge HTTPRoute is attached to the right place.
Check the account registration process using the Ready condition:
$ kubectl wait --for=condition=Ready clusterissuer/letsencrypt-staging
$ kubectl describe clusterissuer/letsencrypt-staging
...
Status:
Acme:
Uri: https://acme-staging-v02.api.letsencrypt.org/acme/acct/123456789
Conditions:
Message: The ACME account was registered with the ACME server
Reason: ACMEAccountRegistered
Status: True
Type: Ready
...
Now we’re ready to update the Gateway annotation to use this issuer instead:
$ kubectl annotate --overwrite gateway/eg cert-manager.io/cluster-issuer=letsencrypt-staging
The Gateway should be picked up by cert-manager, which will create a new certificate for you, and replace the Secret.
You should see a new CertificateRequest to track:
$ kubectl get certificaterequest
NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
eg-https-xxxxx True True selfsigned system:serviceaccount:cert-manager:cert-manager 42m
eg-https-xxxxx True True letsencrypt-staging system:serviceaccount:cert-manager:cert-manager 42m
Testing The Gateway
We still require the -k
flag, since the Let’s Encrypt staging environment CA is not widely trusted.
$ curl -kv -HHost:www.example.com https://127.0.0.1/get
...
* Server certificate:
* subject: CN=Hello World!
* issuer: C=US; O=(STAGING) Let's Encrypt; CN=(STAGING) Ersatz Edamame E1
...
< HTTP/2 200
...
Using The Let’s Encrypt Production Environment
Changing to the production environment is just a matter of replacing the server URI:
$ CERT_MANAGER_CONTACT_EMAIL=$(git config user.email) # Or whatever...
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory # Removed "-staging".
email: "$CERT_MANAGER_CONTACT_EMAIL"
privateKeySecretRef:
name: letsencrypt-account-key # Removed "-staging".
solvers:
- http01:
gatewayHTTPRoute:
parentRefs:
- kind: Gateway
name: eg
namespace: default
EOF
And now you can update the Gateway annotation to point to letsencrypt instead:
$ kubectl annotate --overwrite gateway/eg cert-manager.io/cluster-issuer=letsencrypt
As before, track it by looking at CertificateRequests.
Testing The Gateway
Once the certificate has been replaced, we should finally be able to get rid of the -k flag:
$ curl -v -HHost:www.example.com --resolve www.example.com:443:127.0.0.1 https://www.example.com/get
...
* Server certificate:
* subject: CN=Hello World!
* issuer: C=US; O=Let's Encrypt; CN=R3
...
< HTTP/2 200
...
Collecting Garbage
You probably want to set the cert-manager.io/revision-history-limit
annotation on your Gateway to make cert-manager prune the CertificateRequest history.
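For example, keeping only a single old revision around might look like this (hedged; check Supported Annotations for the exact semantics):
$ kubectl annotate --overwrite gateway/eg cert-manager.io/revision-history-limit=1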
cert-manager deletes unused Certificate resources, and they are updated in-place when possible, so there should be no need for cleaning up Certificate resources.
The deletion is based on whether a Gateway still holds a tls.certificateRefs
that requires the Certificate.
If you remove a TLS listener from a Gateway, you may still have a Secret lingering.
cert-manager can clean it up using a flag.
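The flag in question is the controller's --enable-certificate-owner-ref option, which makes the Secret owned by (and garbage-collected with) its Certificate. One way to set it through the Helm chart is sketched below, assuming the extraArgs list the chart exposes:
$ helm upgrade --reuse-values -n cert-manager cert-manager jetstack/cert-manager --set "extraArgs={--enable-certificate-owner-ref=true}"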
Issuer Namespaces
We have used ClusterIssuer resources in this tutorial.
They are not bound to any namespace, and will read annotations from Gateways in any namespace.
You could also use Issuer, which is bound to a namespace.
This is useful e.g. if you want to use different ACME accounts for different namespaces.
If you change the issuer kind, you also need to change the annotation key from cert-manager.io/cluster-issuer to cert-manager.io/issuer.
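For example, a namespaced self-signing Issuer mirroring the ClusterIssuer above could look like this:
$ kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: default
spec:
  selfSigned: {}
EOF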
Using ExternalDNS
The ExternalDNS controller maintains DNS records based on Kubernetes resources.
Together with cert-manager, this can be used to fully automate hostname management.
It can use various source resources, among them Gateway Routes.
Just specify a Gateway Route resource, let ExternalDNS create the domain records, and then cert-manager the TLS certificate.
The tutorial on Gateway API uses kubectl.
They also have a Helm chart, which is easier to customize.
The only thing relevant to Envoy Gateway is to set the sources:
# values.yaml
sources:
- gateway-httproute
- gateway-grpcroute
- gateway-tcproute
- gateway-tlsroute
- gateway-udproute
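As a sketch (the chart repository and release name are assumptions; check the ExternalDNS documentation for your setup), installing it with those sources could look like:
$ helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
$ helm upgrade --install external-dns external-dns/external-dns -f values.yaml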
Monitoring Progress / Troubleshooting
You can monitor progress in several ways:
The Issuer has a Ready condition (though this is rather boring for the selfSigned
type):
$ kubectl get issuer --all-namespaces
NAMESPACE NAME READY AGE
default selfsigned True 42m
The Gateway will say when it has an invalid certificate:
$ kubectl describe gateway/eg
...
Conditions:
Message: Secret default/eg-https does not exist.
Reason: InvalidCertificateRef
Status: False
Type: ResolvedRefs
...
Message: Listener is invalid, see other Conditions for details.
Reason: Invalid
Status: False
Type: Programmed
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BadConfig 42m cert-manager-gateway-shim Skipped a listener block: spec.listeners[1].hostname: Required value: the hostname cannot be empty
The main question is whether cert-manager has picked up on the Gateway, i.e., whether it has created a Certificate for it.
The above describe
contains an event from cert-manager-gateway-shim
telling you of one such issue.
Be aware that if you have a non-TLS listener in the Gateway, like we did, there will be events saying it is not eligible, which is of course expected.
Another option is looking at Deployment logs.
The cert-manager logs are not very verbose by default, but setting the Helm value global.logLevel
to 6 will enable all debug logs (the default is 2.)
This will make them verbose enough to say why a Gateway wasn’t considered (e.g. missing hostname
, or tls.mode
is not Terminate
.)
$ kubectl logs -n cert-manager deployment/cert-manager
...
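For example, bumping the log level on an existing Helm release might look like this (adjust the release name and namespace to your installation):
$ helm upgrade --reuse-values -n cert-manager cert-manager jetstack/cert-manager --set global.logLevel=6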
Simply listing Certificate resources may be useful, even if it just gives a yes/no answer:
$ kubectl get certificate --all-namespaces
NAMESPACE NAME READY SECRET AGE
default eg-https True eg-https 42m
If there is a Certificate, then the gateway-shim
has recognized the Gateway.
But is there a CertificateRequest for it?
(BTW, don’t confuse this with a CertificateSigningRequest, which is a Kubernetes core resource type representing the same thing.)
$ kubectl get certificaterequest --all-namespaces
NAMESPACE NAME APPROVED DENIED READY ISSUER REQUESTOR AGE
default eg-https-xxxxx True True selfsigned system:serviceaccount:cert-manager:cert-manager 42m
The ACME issuer also has Order
and Challenge
resources to watch:
$ kubectl get order --all-namespaces -o wide
NAME STATE ISSUER REASON AGE
order.acme.cert-manager.io/envoy-https-xxxxx-123456789 pending letsencrypt-staging 42m
$ kubectl get challenge --all-namespaces
NAME STATE DOMAIN AGE
challenge.acme.cert-manager.io/envoy-https-xxxxx-123456789-1234567890 pending www.example.com 42m
Using kubectl get -o wide or kubectl describe on the Challenge will explain its state in more detail.
$ kubectl get order --all-namespaces -o wide
NAME STATE ISSUER REASON AGE
order.acme.cert-manager.io/envoy-https-xxxxx-123456789 valid letsencrypt-staging 42m
Finally, since cert-manager creates the Secret referenced by the Gateway listener as its last step, we can also look for that:
$ kubectl get secret/eg-https
NAME TYPE DATA AGE
eg-https kubernetes.io/tls 3 42m
Clean Up
- Uninstall cert-manager:
helm uninstall --namespace cert-manager cert-manager
- Delete the cert-manager namespace:
kubectl delete namespace/cert-manager
- Delete the https listener from gateway/eg.
- Delete secret/eg-https.
See Also
2.28 - Visualising metrics using Grafana
Envoy Gateway provides support for exposing Envoy Proxy metrics to a Prometheus instance.
This guide shows you how to visualise the metrics exposed to Prometheus using Grafana.
Prerequisites
Follow the steps from the Quickstart Guide to install Envoy Gateway and the example manifest.
Before proceeding, you should be able to query the example backend using HTTP.
Follow the steps from the Proxy Observability to enable prometheus metrics.
Prometheus is used to scrape metrics from the Envoy Proxy instances. Install Prometheus:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install prometheus prometheus-community/prometheus -n monitoring --create-namespace
Grafana is used to visualise the metrics exposed by the Envoy Proxy instances.
Install Grafana:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install grafana grafana/grafana -f https://raw.githubusercontent.com/envoyproxy/gateway/v0.6.0/examples/grafana/helm-values.yaml -n monitoring --create-namespace
Expose endpoints:
GRAFANA_IP=$(kubectl get svc grafana -n monitoring -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
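If your cluster cannot provision a LoadBalancer address for the Grafana Service, a port-forward is a simple alternative (a sketch, assuming the chart's default Service port of 80):
kubectl port-forward -n monitoring svc/grafana 3000:80
# Grafana is then reachable at http://localhost:3000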
Connecting Grafana with Prometheus datasource
To visualise metrics from Prometheus, we have to connect Grafana with Prometheus. If you installed Grafana using the command from the prerequisites section, the Prometheus datasource should already be configured.
You can also add the data source manually by following the instructions from Grafana Docs.
Accessing Grafana
You can access the Grafana instance by visiting http://{GRAFANA_IP}, using the address obtained above.
To log in to Grafana, use the credentials admin:admin
.
Envoy Gateway provides example dashboards for you to get started:
Envoy Proxy Global
Envoy Clusters
Envoy Pod Resources
You can load the above dashboards in your Grafana to get started. Please refer to Grafana docs for importing dashboards.
3 - Installation
This section includes installation related contents of Envoy Gateway.
3.1 - Install with Helm
Helm is a package manager for Kubernetes that automates the release and management of software on Kubernetes.
Envoy Gateway can be installed via a Helm chart with a few simple steps, depending on whether you are deploying for the first time, upgrading an existing installation, or migrating from Envoy Gateway.
Before you begin
The Envoy Gateway Helm chart is hosted by DockerHub.
It is published at oci://docker.io/envoyproxy/gateway-helm
.
Install with Helm
Envoy Gateway is typically deployed to Kubernetes from the command line. If you don’t have a Kubernetes cluster, you can use kind to create one.
Install the Gateway API CRDs and Envoy Gateway:
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace
Wait for Envoy Gateway to become available:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available
Install the GatewayClass, Gateway, HTTPRoute and example app:
kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/latest/quickstart.yaml -n default
Note: quickstart.yaml
defines that Envoy Gateway will listen for
traffic on port 80 on its globally-routable IP address, to make it easy to use
browsers to test Envoy Gateway. When Envoy Gateway sees that its Listener is
using a privileged port (<1024), it will map this internally to an
unprivileged port, so that Envoy Gateway doesn’t need additional privileges.
It’s important to be aware of this mapping, since you may need to take it into
consideration when debugging.
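As a quick sanity check (a sketch assuming your Gateway received an external address), you can send a request through Envoy:
export GATEWAY_HOST=$(kubectl get gateway/eg -o jsonpath='{.status.addresses[0].value}')
curl --verbose --header "Host: www.example.com" http://$GATEWAY_HOST/get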
Helm chart customizations
Below are some quick ways of using the helm install command to customize an Envoy Gateway installation.
Increase the replicas
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace --set deployment.replicas=2
Change the kubernetesClusterDomain name
If you have installed your cluster with a different domain name, you can use the command below.
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace --set kubernetesClusterDomain=<domain name>
Note: The above are some of the customizations you can apply directly on the command line. If you are looking for more complex changes, a values.yaml file comes to the rescue.
Using values.yaml file for complex installation
deployment:
envoyGateway:
resources:
limits:
cpu: 700m
memory: 128Mi
requests:
cpu: 10m
memory: 64Mi
ports:
- name: grpc
port: 18005
targetPort: 18000
- name: ratelimit
port: 18006
targetPort: 18001
config:
envoyGateway:
logging:
level:
default: debug
Here we have made three changes to our values.yaml file: increased the CPU resource limit to 700m, changed the port for grpc to 18005 and for ratelimit to 18006, and updated the logging level to debug.
You can use the command below to install Envoy Gateway using the values.yaml file.
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v0.0.0-latest -n envoy-gateway-system --create-namespace -f values.yaml
Helm Chart Values
If you want to know all the available fields inside the values.yaml file, please see the
Helm Chart Values.
Open Ports
These are the ports used by Envoy Gateway and the managed Envoy Proxy.
Envoy Gateway
Envoy Gateway | Address | Port | Configurable |
---|---|---|---|
Xds EnvoyProxy Server | 0.0.0.0 | 18000 | No |
Xds RateLimit Server | 0.0.0.0 | 18001 | No |
Admin Server | 127.0.0.1 | 19000 | Yes |
Metrics Server | 0.0.0.0 | 19001 | No |
Health Check | 127.0.0.1 | 8081 | No |
EnvoyProxy
Envoy Proxy | Address | Port |
---|---|---|
Admin Server | 127.0.0.1 | 19000 |
Health Check | 0.0.0.0 | 19001 |
Next Steps
Envoy Gateway should now be successfully installed and running. To explore more of its capabilities, refer to the
User Guides.
3.2 - Install with Kubernetes YAML
In this guide, we’ll walk you through installing Envoy Gateway in your Kubernetes cluster.
The manual install process does not allow for as much control over configuration
as the Helm install method, so if you need more control over your Envoy Gateway
installation, it is recommended that you use helm.
Before you begin
Envoy Gateway is designed to run in Kubernetes for production. The most essential requirements are:
- Kubernetes 1.25 or later
- The
kubectl
command-line tool
Install with YAML
Envoy Gateway is typically deployed to Kubernetes from the command line. If you don’t have a Kubernetes cluster, you can use kind to create one.
In your terminal, run the following command:
kubectl apply -f https://github.com/envoyproxy/gateway/releases/download/latest/install.yaml
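As with the Helm install, you can wait for Envoy Gateway to become available before moving on:
kubectl wait --timeout=5m -n envoy-gateway-system deployment/envoy-gateway --for=condition=Available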
Next Steps
Envoy Gateway should now be successfully installed and running. To explore more of its capabilities, refer to the User Guides.
3.3 - Install egctl
What is egctl?
egctl is a command line tool that provides additional functionality for Envoy Gateway users. This guide shows how to install the egctl CLI. egctl can be installed either from source or from pre-built binary releases.
From The Envoy Gateway Project
The Envoy Gateway project provides two ways to fetch and install egctl. These are the official methods to get egctl releases; both are described below.
From the Binary Releases
Every release of egctl provides binary releases for a variety of OSes. These binary versions can be manually downloaded and installed.
- Download your desired version
- Unpack it (tar -zxvf egctl_latest_linux_amd64.tar.gz)
- Find the egctl binary in the unpacked directory, and move it to its desired destination (mv bin/linux/amd64/egctl /usr/local/bin/egctl)
From there, you should be able to run: egctl help
.
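For example, a quick check that the binary is on your PATH and working (assuming egctl provides a version subcommand):
egctl version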
From Script
egctl
now has an installer script that will automatically grab the latest release version of egctl and install it locally.
You can fetch that script, and then execute it locally. It’s well documented so that you can read through it and understand what it is doing before you run it.
curl -fsSL -o get-egctl.sh https://gateway.envoyproxy.io/get-egctl.sh
chmod +x get-egctl.sh
# get help info for the script
bash get-egctl.sh --help
# install the latest development version of egctl
VERSION=latest bash get-egctl.sh
Alternatively, you can just use the command below if you want to live on the edge.
curl -fsSL https://gateway.envoyproxy.io/get-egctl.sh | VERSION=latest bash
Next Steps
You can refer to the User Guides for more details about egctl.
3.4 - gateway-helm
The Helm chart for Envoy Gateway
Homepage: https://gateway.envoyproxy.io/
Maintainers
Source Code
Values
Key | Type | Default | Description |
---|
certgen.job.annotations | object | {} | |
certgen.job.ttlSecondsAfterFinished | int | 0 | |
certgen.rbac.annotations | object | {} | |
certgen.rbac.labels | object | {} | |
config.envoyGateway.gateway.controllerName | string | "gateway.envoyproxy.io/gatewayclass-controller" | |
config.envoyGateway.logging.level.default | string | "info" | |
config.envoyGateway.provider.type | string | "Kubernetes" | |
createNamespace | bool | false | |
deployment.envoyGateway.cert.expiryDays | int | 365 | |
deployment.envoyGateway.image.repository | string | "${ImageRepository}" | |
deployment.envoyGateway.image.tag | string | "${ImageTag}" | |
deployment.envoyGateway.imagePullPolicy | string | "Always" | |
deployment.envoyGateway.resources.limits.cpu | string | "500m" | |
deployment.envoyGateway.resources.limits.memory | string | "1024Mi" | |
deployment.envoyGateway.resources.requests.cpu | string | "100m" | |
deployment.envoyGateway.resources.requests.memory | string | "256Mi" | |
deployment.pod.annotations | object | {} | |
deployment.pod.labels | object | {} | |
deployment.ports[0].name | string | "grpc" | |
deployment.ports[0].port | int | 18000 | |
deployment.ports[0].targetPort | int | 18000 | |
deployment.ports[1].name | string | "ratelimit" | |
deployment.ports[1].port | int | 18001 | |
deployment.ports[1].targetPort | int | 18001 | |
deployment.replicas | int | 1 | |
envoyGatewayMetricsService.ports[0].name | string | "http" | |
envoyGatewayMetricsService.ports[0].port | int | 19001 | |
envoyGatewayMetricsService.ports[0].protocol | string | "TCP" | |
envoyGatewayMetricsService.ports[0].targetPort | int | 19001 | |
kubernetesClusterDomain | string | "cluster.local" | |
4 - API
This section includes APIs of Envoy Gateway.
4.1 - API Reference
Packages
gateway.envoyproxy.io/v1alpha1
Package v1alpha1 contains API schema definitions for the gateway.envoyproxy.io
API group.
Resource Types
BackendTrafficPolicy
BackendTrafficPolicy allows the user to configure the behavior of the connection between the downstream client and Envoy Proxy listener.
Appears in:
Field | Description |
---|
apiVersion string | gateway.envoyproxy.io/v1alpha1 |
kind string | BackendTrafficPolicy |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata . |
spec BackendTrafficPolicySpec | spec defines the desired state of BackendTrafficPolicy. |
BackendTrafficPolicyList
BackendTrafficPolicyList contains a list of BackendTrafficPolicy resources.
Field | Description |
---|
apiVersion string | gateway.envoyproxy.io/v1alpha1 |
kind string | BackendTrafficPolicyList |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata . |
items BackendTrafficPolicy array | |
BackendTrafficPolicySpec
spec defines the desired state of BackendTrafficPolicy.
Appears in:
Field | Description |
---|
targetRef PolicyTargetReferenceWithSectionName | targetRef is the name of the resource this policy is being attached to. This Policy and the TargetRef MUST be in the same namespace for this Policy to have effect and be applied to the Gateway. |
rateLimit RateLimitSpec | RateLimit allows the user to limit the number of incoming requests to a predefined value based on attributes within the traffic flow. |
loadBalancer LoadBalancer | LoadBalancer policy to apply when routing traffic from the gateway to the backend endpoints |
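To make the shape of the resource concrete, here is a sketch of a BackendTrafficPolicy that applies a RoundRobin load balancer policy to an HTTPRoute named backend (the route name and namespace are illustrative):
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: BackendTrafficPolicy
metadata:
  name: round-robin-policy
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: backend
  loadBalancer:
    type: RoundRobin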
BootstrapType
Underlying type: string
BootstrapType defines the types of bootstrap supported by Envoy Gateway.
Appears in:
CORS
CORS defines the configuration for Cross-Origin Resource Sharing (CORS).
Appears in:
Field | Description |
---|
allowOrigins StringMatch array | AllowOrigins defines the origins that are allowed to make requests. |
allowMethods string array | AllowMethods defines the methods that are allowed to make requests. |
allowHeaders string array | AllowHeaders defines the headers that are allowed to be sent with requests. |
exposeHeaders string array | ExposeHeaders defines the headers that can be exposed in the responses. |
maxAge Duration | MaxAge defines how long the results of a preflight request can be cached. |
ClaimToHeader defines a configuration to convert JWT claims into HTTP headers
Appears in:
Field | Description |
---|
header string | Header defines the name of the HTTP request header that the JWT Claim will be saved into. |
claim string | Claim is the JWT Claim that should be saved into the header : it can be a nested claim of type (eg. “claim.nested.key”, “sub”). The nested claim name must use dot “.” to separate the JSON name path. |
ClientTrafficPolicy
ClientTrafficPolicy allows the user to configure the behavior of the connection between the downstream client and Envoy Proxy listener.
Appears in:
Field | Description |
---|
apiVersion string | gateway.envoyproxy.io/v1alpha1 |
kind string | ClientTrafficPolicy |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata . |
spec ClientTrafficPolicySpec | Spec defines the desired state of ClientTrafficPolicy. |
ClientTrafficPolicyList
ClientTrafficPolicyList contains a list of ClientTrafficPolicy resources.
Field | Description |
---|
apiVersion string | gateway.envoyproxy.io/v1alpha1 |
kind string | ClientTrafficPolicyList |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata . |
items ClientTrafficPolicy array | |
ClientTrafficPolicySpec
ClientTrafficPolicySpec defines the desired state of ClientTrafficPolicy.
Appears in:
Field | Description |
---|
targetRef PolicyTargetReferenceWithSectionName | TargetRef is the name of the Gateway resource this policy is being attached to. This Policy and the TargetRef MUST be in the same namespace for this Policy to have effect and be applied to the Gateway. TargetRef |
tcpKeepalive TCPKeepalive | TcpKeepalive settings associated with the downstream client connection. If defined, sets SO_KEEPALIVE on the listener socket to enable TCP Keepalives. Disabled by default. |
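For illustration, a sketch of a ClientTrafficPolicy that enables TCP keepalives on a Gateway named eg (the keepalive values are illustrative assumptions):
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: ClientTrafficPolicy
metadata:
  name: enable-tcp-keepalive
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
  tcpKeepalive:
    probes: 3
    idleTime: 20m
    interval: 60s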
ConsistentHash
ConsistentHash defines the configuration related to the consistent hash load balancer policy
Appears in:
ConsistentHashType
Underlying type: string
ConsistentHashType defines the type of input to hash on.
Appears in:
CustomTag
Appears in:
Field | Description |
---|
type CustomTagType | Type defines the type of custom tag. |
literal LiteralCustomTag | Literal adds hard-coded value to each span. It’s required when the type is “Literal”. |
environment EnvironmentCustomTag | Environment adds value from environment variable to each span. It’s required when the type is “Environment”. |
requestHeader RequestHeaderCustomTag | RequestHeader adds value from request header to each span. It’s required when the type is “RequestHeader”. |
CustomTagType
Underlying type: string
Appears in:
EnvironmentCustomTag
EnvironmentCustomTag adds value from environment variable to each span.
Appears in:
Field | Description |
---|
name string | Name defines the name of the environment variable which to extract the value from. |
defaultValue string | DefaultValue defines the default value to use if the environment variable is not set. |
EnvoyGateway
EnvoyGateway is the schema for the envoygateways API.
Field | Description |
---|
apiVersion string | gateway.envoyproxy.io/v1alpha1 |
kind string | EnvoyGateway |
gateway Gateway | Gateway defines desired Gateway API specific configuration. If unset, default configuration parameters will apply. |
provider EnvoyGatewayProvider | Provider defines the desired provider and provider-specific configuration. If unspecified, the Kubernetes provider is used with default configuration parameters. |
logging EnvoyGatewayLogging | Logging defines logging parameters for Envoy Gateway. |
admin EnvoyGatewayAdmin | Admin defines the desired admin related abilities. If unspecified, the Admin is used with default configuration parameters. |
telemetry EnvoyGatewayTelemetry | Telemetry defines the desired control plane telemetry related abilities. If unspecified, the telemetry is used with default configuration. |
rateLimit RateLimit | RateLimit defines the configuration associated with the Rate Limit service deployed by Envoy Gateway required to implement the Global Rate limiting functionality. The specific rate limit service used here is the reference implementation in Envoy. For more details visit https://github.com/envoyproxy/ratelimit. This configuration is unneeded for “Local” rate limiting. |
extensionManager ExtensionManager | ExtensionManager defines an extension manager to register for the Envoy Gateway Control Plane. |
extensionApis ExtensionAPISettings | ExtensionAPIs defines the settings related to specific Gateway API Extensions implemented by Envoy Gateway |
EnvoyGatewayAdmin
EnvoyGatewayAdmin defines the Envoy Gateway Admin configuration.
Appears in:
Field | Description |
---|
address EnvoyGatewayAdminAddress | Address defines the address of Envoy Gateway Admin Server. |
enableDumpConfig boolean | EnableDumpConfig defines if enable dump config in Envoy Gateway logs. |
enablePprof boolean | EnablePprof defines if enable pprof in Envoy Gateway Admin Server. |
EnvoyGatewayAdminAddress
EnvoyGatewayAdminAddress defines the Envoy Gateway Admin Address configuration.
Appears in:
Field | Description |
---|
port integer | Port defines the port the admin server is exposed on. |
host string | Host defines the admin server hostname. |
EnvoyGatewayCustomProvider
EnvoyGatewayCustomProvider defines configuration for the Custom provider.
Appears in:
Field | Description |
---|
resource EnvoyGatewayResourceProvider | Resource defines the desired resource provider. This provider is used to specify the provider to be used to retrieve the resource configurations such as Gateway API resources |
infrastructure EnvoyGatewayInfrastructureProvider | Infrastructure defines the desired infrastructure provider. This provider is used to specify the provider to be used to provide an environment to deploy the out resources like the Envoy Proxy data plane. |
EnvoyGatewayFileResourceProvider
EnvoyGatewayFileResourceProvider defines configuration for the File Resource provider.
Appears in:
Field | Description |
---|
paths string array | Paths are the paths to a directory or file containing the resource configuration. Recursive sub directories are not currently supported. |
EnvoyGatewayHostInfrastructureProvider
EnvoyGatewayHostInfrastructureProvider defines configuration for the Host Infrastructure provider.
Appears in:
EnvoyGatewayInfrastructureProvider
EnvoyGatewayInfrastructureProvider defines configuration for the Custom Infrastructure provider.
Appears in:
EnvoyGatewayKubernetesProvider
EnvoyGatewayKubernetesProvider defines configuration for the Kubernetes provider.
Appears in:
Field | Description |
---|
rateLimitDeployment KubernetesDeploymentSpec | RateLimitDeployment defines the desired state of the Envoy ratelimit deployment resource. If unspecified, default settings for the managed Envoy ratelimit deployment resource are applied. |
watch KubernetesWatchMode | Watch holds configuration of which input resources should be watched and reconciled. |
deploy KubernetesDeployMode | Deploy holds configuration of how output managed resources such as the Envoy Proxy data plane should be deployed |
overwrite_control_plane_certs boolean | OverwriteControlPlaneCerts updates the secrets containing the control plane certs, when set. |
EnvoyGatewayLogComponent
Underlying type: string
EnvoyGatewayLogComponent defines a component that supports a configured logging level.
Appears in:
EnvoyGatewayLogging
EnvoyGatewayLogging defines logging for Envoy Gateway.
Appears in:
Field | Description |
---|
level object (keys:EnvoyGatewayLogComponent, values:LogLevel) | Level is the logging level. If unspecified, defaults to “info”. EnvoyGatewayLogComponent options: default/provider/gateway-api/xds-translator/xds-server/infrastructure/global-ratelimit. LogLevel options: debug/info/error/warn. |
EnvoyGatewayMetricSink
EnvoyGatewayMetricSink defines control plane metric sinks where metrics are sent to.
Appears in:
Field | Description |
---|
type MetricSinkType | Type defines the metric sink type. EG control plane currently supports OpenTelemetry. |
openTelemetry EnvoyGatewayOpenTelemetrySink | OpenTelemetry defines the configuration for OpenTelemetry sink. It’s required if the sink type is OpenTelemetry. |
EnvoyGatewayMetrics
EnvoyGatewayMetrics defines control plane push/pull metrics configurations.
Appears in:
EnvoyGatewayOpenTelemetrySink
Appears in:
Field | Description |
---|
host string | Host define the sink service hostname. |
protocol string | Protocol define the sink service protocol. |
port integer | Port defines the port the sink service is exposed on. |
EnvoyGatewayPrometheusProvider
EnvoyGatewayPrometheusProvider will expose prometheus endpoint in pull mode.
Appears in:
Field | Description |
---|
disable boolean | Disable defines if disables the prometheus metrics in pull mode. |
EnvoyGatewayProvider
EnvoyGatewayProvider defines the desired configuration of a provider.
Appears in:
Field | Description |
---|
type ProviderType | Type is the type of provider to use. Supported types are “Kubernetes”. |
kubernetes EnvoyGatewayKubernetesProvider | Kubernetes defines the configuration of the Kubernetes provider. Kubernetes provides runtime configuration via the Kubernetes API. |
custom EnvoyGatewayCustomProvider | Custom defines the configuration for the Custom provider. This provider allows you to define a specific resource provider and a infrastructure provider. |
EnvoyGatewayResourceProvider
EnvoyGatewayResourceProvider defines configuration for the Custom Resource provider.
Appears in:
Field | Description |
---|
type ResourceProviderType | Type is the type of resource provider to use. Supported types are “File”. |
file EnvoyGatewayFileResourceProvider | File defines the configuration of the File provider. File provides runtime configuration defined by one or more files. |
EnvoyGatewaySpec
EnvoyGatewaySpec defines the desired state of Envoy Gateway.
Appears in:
Field | Description |
---|
gateway Gateway | Gateway defines desired Gateway API specific configuration. If unset, default configuration parameters will apply. |
provider EnvoyGatewayProvider | Provider defines the desired provider and provider-specific configuration. If unspecified, the Kubernetes provider is used with default configuration parameters. |
logging EnvoyGatewayLogging | Logging defines logging parameters for Envoy Gateway. |
admin EnvoyGatewayAdmin | Admin defines the desired admin related abilities. If unspecified, the Admin is used with default configuration parameters. |
telemetry EnvoyGatewayTelemetry | Telemetry defines the desired control plane telemetry related abilities. If unspecified, the telemetry is used with default configuration. |
rateLimit RateLimit | RateLimit defines the configuration associated with the Rate Limit service deployed by Envoy Gateway required to implement the Global Rate limiting functionality. The specific rate limit service used here is the reference implementation in Envoy. For more details visit https://github.com/envoyproxy/ratelimit. This configuration is unneeded for “Local” rate limiting. |
extensionManager ExtensionManager | ExtensionManager defines an extension manager to register for the Envoy Gateway Control Plane. |
extensionApis ExtensionAPISettings | ExtensionAPIs defines the settings related to specific Gateway API Extensions implemented by Envoy Gateway |
EnvoyGatewayTelemetry
EnvoyGatewayTelemetry defines telemetry configurations for envoy gateway control plane. Control plane will focus on metrics observability telemetry and tracing telemetry later.
Appears in:
Field | Description |
---|
metrics EnvoyGatewayMetrics | Metrics defines metrics configuration for envoy gateway. |
EnvoyJSONPatchConfig
EnvoyJSONPatchConfig defines the configuration for patching a Envoy xDS Resource using JSONPatch semantic
Appears in:
Field | Description |
---|
type EnvoyResourceType | Type is the typed URL of the Envoy xDS Resource |
name string | Name is the name of the resource |
operation JSONPatchOperation | Patch defines the JSON Patch Operation |
EnvoyPatchPolicy
EnvoyPatchPolicy allows the user to modify the generated Envoy xDS resources by Envoy Gateway using this patch API
Appears in:
Field | Description |
---|
apiVersion string | gateway.envoyproxy.io/v1alpha1 |
kind string | EnvoyPatchPolicy |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata . |
spec EnvoyPatchPolicySpec | Spec defines the desired state of EnvoyPatchPolicy. |
EnvoyPatchPolicyList
EnvoyPatchPolicyList contains a list of EnvoyPatchPolicy resources.
Field | Description |
---|
apiVersion string | gateway.envoyproxy.io/v1alpha1 |
kind string | EnvoyPatchPolicyList |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata . |
items EnvoyPatchPolicy array | |
EnvoyPatchPolicySpec
EnvoyPatchPolicySpec defines the desired state of EnvoyPatchPolicy.
Appears in:
Field | Description |
---|
type EnvoyPatchType | Type decides the type of patch. Valid EnvoyPatchType values are “JSONPatch”. |
jsonPatches EnvoyJSONPatchConfig array | JSONPatch defines the JSONPatch configuration. |
targetRef PolicyTargetReference | TargetRef is the name of the Gateway API resource this policy is being attached to. Currently only attaching to Gateway is supported This Policy and the TargetRef MUST be in the same namespace for this Policy to have effect and be applied to the Gateway TargetRef |
priority integer | Priority of the EnvoyPatchPolicy. If multiple EnvoyPatchPolicies are applied to the same TargetRef, they will be applied in the ascending order of the priority i.e. int32.min has the highest priority and int32.max has the lowest priority. Defaults to 0. |
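As a concrete sketch (the listener name, patch path, and value are illustrative), a JSONPatch-type policy targeting a Gateway named eg might look like:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyPatchPolicy
metadata:
  name: custom-buffer-limit
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg
    namespace: default
  type: JSONPatch
  priority: 0
  jsonPatches:
    - type: "type.googleapis.com/envoy.config.listener.v3.Listener"
      name: default/eg/http
      operation:
        op: add
        path: "/per_connection_buffer_limit_bytes"
        value: "32768"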
EnvoyPatchType
Underlying type: string
EnvoyPatchType specifies the types of Envoy patching mechanisms.
Appears in:
EnvoyProxy
EnvoyProxy is the schema for the envoyproxies API.
Field | Description |
---|
apiVersion string | gateway.envoyproxy.io/v1alpha1 |
kind string | EnvoyProxy |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata . |
spec EnvoyProxySpec | EnvoyProxySpec defines the desired state of EnvoyProxy. |
EnvoyProxyKubernetesProvider
EnvoyProxyKubernetesProvider defines configuration for the Kubernetes resource provider.
Appears in:
Field | Description |
---|
envoyDeployment KubernetesDeploymentSpec | EnvoyDeployment defines the desired state of the Envoy deployment resource. If unspecified, default settings for the managed Envoy deployment resource are applied. |
envoyService KubernetesServiceSpec | EnvoyService defines the desired state of the Envoy service resource. If unspecified, default settings for the managed Envoy service resource are applied. |
EnvoyProxyProvider
EnvoyProxyProvider defines the desired state of a resource provider.
Appears in:
Field | Description |
---|
type ProviderType | Type is the type of resource provider to use. A resource provider provides infrastructure resources for running the data plane, e.g. Envoy proxy, and optional auxiliary control planes. Supported types are “Kubernetes”. |
kubernetes EnvoyProxyKubernetesProvider | Kubernetes defines the desired state of the Kubernetes resource provider. Kubernetes provides infrastructure resources for running the data plane, e.g. Envoy proxy. If unspecified and type is “Kubernetes”, default settings for managed Kubernetes resources are applied. |
EnvoyProxySpec
EnvoyProxySpec defines the desired state of EnvoyProxy.
Appears in:
Field | Description |
---|
provider EnvoyProxyProvider | Provider defines the desired resource provider and provider-specific configuration. If unspecified, the “Kubernetes” resource provider is used with default configuration parameters. |
logging ProxyLogging | Logging defines logging parameters for managed proxies. |
telemetry ProxyTelemetry | Telemetry defines telemetry parameters for managed proxies. |
bootstrap ProxyBootstrap | Bootstrap defines the Envoy Bootstrap as a YAML string. Visit https://www.envoyproxy.io/docs/envoy/latest/api-v3/config/bootstrap/v3/bootstrap.proto#envoy-v3-api-msg-config-bootstrap-v3-bootstrap to learn more about the syntax. If set, this is the Bootstrap configuration used for the managed Envoy Proxy fleet instead of the default Bootstrap configuration set by Envoy Gateway. Some fields within the Bootstrap that are required to communicate with the xDS Server (Envoy Gateway) and receive xDS resources from it are not configurable and will result in the EnvoyProxy resource being rejected. Backward compatibility across minor versions is not guaranteed. We strongly recommend using egctl x translate to generate a EnvoyProxy resource with the Bootstrap field set to the default Bootstrap configuration used. You can edit this configuration, and rerun egctl x translate to ensure there are no validation errors. |
concurrency integer | Concurrency defines the number of worker threads to run. If unset, it defaults to the number of cpuset threads on the platform. |
mergeGateways boolean | MergeGateways defines if Gateway resources should be merged onto the same Envoy Proxy Infrastructure. Setting this field to true would merge all Gateway Listeners under the parent Gateway Class. This means that the port, protocol and hostname tuple must be unique for every listener. If a duplicate listener is detected, the newer listener (based on timestamp) will be rejected and its status will be updated with a “Accepted=False” condition. |
EnvoyResourceType
Underlying type: string
EnvoyResourceType specifies the type URL of the Envoy resource.
Appears in:
ExtensionAPISettings
ExtensionAPISettings defines the settings specific to Gateway API Extensions.
Appears in:
Field | Description |
---|
enableEnvoyPatchPolicy boolean | EnableEnvoyPatchPolicy enables Envoy Gateway to reconcile and implement the EnvoyPatchPolicy resources. |
ExtensionHooks
ExtensionHooks defines extension hooks across all supported runners
Appears in:
Field | Description |
---|
xdsTranslator XDSTranslatorHooks | XDSTranslator defines all the supported extension hooks for the xds-translator runner |
ExtensionManager
ExtensionManager defines the configuration for registering an extension manager to the Envoy Gateway control plane.
Appears in:
Field | Description |
---|
resources GroupVersionKind array | Resources defines the set of K8s resources the extension will handle. |
hooks ExtensionHooks | Hooks defines the set of hooks the extension supports |
service ExtensionService | Service defines the configuration of the extension service that the Envoy Gateway Control Plane will call through extension hooks. |
ExtensionService
ExtensionService defines the configuration for connecting to a registered extension service.
Appears in:
Field | Description |
---|
host string | Host define the extension service hostname. |
port integer | Port defines the port the extension service is exposed on. |
tls ExtensionTLS | TLS defines TLS configuration for communication between Envoy Gateway and the extension service. |
ExtensionTLS
ExtensionTLS defines the TLS configuration when connecting to an extension service
Appears in:
Field | Description |
---|
certificateRef SecretObjectReference | CertificateRef contains a references to objects (Kubernetes objects or otherwise) that contains a TLS certificate and private keys. These certificates are used to establish a TLS handshake to the extension server. |
CertificateRef can only reference a Kubernetes Secret at this time. | |
FileEnvoyProxyAccessLog
Appears in:
Field | Description |
---|
path string | Path defines the file path used to expose envoy access log(e.g. /dev/stdout). |
Gateway
Gateway defines the desired Gateway API configuration of Envoy Gateway.
Appears in:
GlobalRateLimit
GlobalRateLimit defines global rate limit configuration.
Appears in:
Field | Description |
---|
rules RateLimitRule array | Rules are a list of RateLimit selectors and limits. Each rule and its associated limit is applied in a mutually exclusive way i.e. if multiple rules get selected, each of their associated limits get applied, so a single traffic request might increase the rate limit counters for multiple rules if selected. |
GroupVersionKind
GroupVersionKind unambiguously identifies a Kind. It can be converted to k8s.io/apimachinery/pkg/runtime/schema.GroupVersionKind
Appears in:
Field | Description |
---|
group string | |
version string | |
kind string | |
HeaderMatch defines the match attributes within the HTTP Headers of the request.
Appears in:
InfrastructureProviderType
Underlying type: string
InfrastructureProviderType defines the types of custom infrastructure providers supported by Envoy Gateway.
Appears in:
JSONPatchOperation
JSONPatchOperation defines the JSON Patch Operation as defined in https://datatracker.ietf.org/doc/html/rfc6902
Appears in:
JSONPatchOperationType
Underlying type: string
JSONPatchOperationType specifies the JSON Patch operations that can be performed.
Appears in:
JWT
JWT defines the configuration for JSON Web Token (JWT) authentication.
Appears in:
JWTProvider
JWTProvider defines how a JSON Web Token (JWT) can be verified.
Appears in:
Field | Description |
---|
name string | Name defines a unique name for the JWT provider. A name can have a variety of forms, including RFC1123 subdomains, RFC 1123 labels, or RFC 1035 labels. |
issuer string | Issuer is the principal that issued the JWT and takes the form of a URL or email address. For additional details, see https://tools.ietf.org/html/rfc7519#section-4.1.1 for URL format and https://rfc-editor.org/rfc/rfc5322.html for email format. If not provided, the JWT issuer is not checked. |
audiences string array | Audiences is a list of JWT audiences allowed access. For additional details, see https://tools.ietf.org/html/rfc7519#section-4.1.3. If not provided, JWT audiences are not checked. |
remoteJWKS RemoteJWKS | RemoteJWKS defines how to fetch and cache JSON Web Key Sets (JWKS) from a remote HTTP/HTTPS endpoint. |
claimToHeaders ClaimToHeader array | ClaimToHeaders is a list of JWT claims that must be extracted into HTTP request headers For examples, following config: The claim must be of type; string, int, double, bool. Array type claims are not supported |
KubernetesContainerSpec
KubernetesContainerSpec defines the desired state of the Kubernetes container resource.
Appears in:
KubernetesDeployMode
KubernetesDeployMode holds configuration for how to deploy managed resources such as the Envoy Proxy data plane fleet.
Appears in:
KubernetesDeploymentSpec
KubernetesDeploymentSpec defines the desired state of the Kubernetes deployment resource.
Appears in:
KubernetesPodSpec
KubernetesPodSpec defines the desired state of the Kubernetes pod resource.
Appears in:
Field | Description |
---|
annotations object (keys:string, values:string) | Annotations are the annotations that should be appended to the pods. By default, no pod annotations are appended. |
labels object (keys:string, values:string) | Labels are the additional labels that should be tagged to the pods. By default, no additional pod labels are tagged. |
securityContext PodSecurityContext | SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field. |
affinity Affinity | If specified, the pod’s scheduling constraints. |
tolerations Toleration array | If specified, the pod’s tolerations. |
volumes Volume array | Volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes |
KubernetesServiceSpec
KubernetesServiceSpec defines the desired state of the Kubernetes service resource.
Appears in:
Field | Description |
---|
annotations object (keys:string, values:string) | Annotations that should be appended to the service. By default, no annotations are appended. |
type ServiceType | Type determines how the Service is exposed. Defaults to LoadBalancer. Valid options are ClusterIP, LoadBalancer and NodePort. “LoadBalancer” means a service will be exposed via an external load balancer (if the cloud provider supports it). “ClusterIP” means a service will only be accessible inside the cluster, via the cluster IP. “NodePort” means a service will be exposed on a static Port on all Nodes of the cluster. |
loadBalancerClass string | LoadBalancerClass, when specified, allows for choosing the LoadBalancer provider implementation if more than one are available or is otherwise expected to be specified |
allocateLoadBalancerNodePorts boolean | AllocateLoadBalancerNodePorts defines if NodePorts will be automatically allocated for services with type LoadBalancer. Default is “true”. It may be set to “false” if the cluster load-balancer does not rely on NodePorts. If the caller requests specific NodePorts (by specifying a value), those requests will be respected, regardless of this field. This field may only be set for services with type LoadBalancer and will be cleared if the type is changed to any other type. |
loadBalancerIP string | LoadBalancerIP defines the IP Address of the underlying load balancer service. This field may be ignored if the load balancer provider does not support this feature. This field has been deprecated in Kubernetes, but it is still used for setting the IP Address in some cloud providers such as GCP. |
KubernetesWatchMode
KubernetesWatchMode holds the configuration for which input resources to watch and reconcile.
Appears in:
Field | Description |
---|
Type KubernetesWatchModeType | Type indicates what watch mode to use. KubernetesWatchModeTypeNamespaces and KubernetesWatchModeTypeNamespaceSelectors are currently supported By default, when this field is unset or empty, Envoy Gateway will watch for input namespaced resources from all namespaces. |
Namespaces string array | Namespaces holds the list of namespaces that Envoy Gateway will watch for namespaced scoped resources such as Gateway, HTTPRoute and Service. Note that Envoy Gateway will continue to reconcile relevant cluster scoped resources such as GatewayClass that it is linked to. Precisely one of Namespaces and NamespaceSelectors must be set |
namespaces string array | NamespaceSelectors holds a list of labels that namespaces have to have in order to be watched. Note this doesn’t set the informer to watch the namespaces with the given labels. The informer still watches all namespaces, but events for objects whose namespace does not have the given labels will be filtered out. Precisely one of Namespaces and NamespaceSelectors must be set |
KubernetesWatchModeType
Underlying type: string
KubernetesWatchModeType defines the type of KubernetesWatchMode
Appears in:
LiteralCustomTag
LiteralCustomTag adds hard-coded value to each span.
Appears in:
Field | Description |
---|
value string | Value defines the hard-coded value to add to each span. |
LoadBalancer
LoadBalancer defines the load balancer policy to be applied.
Appears in:
Field | Description |
---|
type LoadBalancerType | Type decides the type of Load Balancer policy. Valid LoadBalancerType values are “ConsistentHash”, “LeastRequest”, “Random”, “RoundRobin”, |
consistentHash ConsistentHash | ConsistentHash defines the configuration when the load balancer type is set to ConsistentHash |
LoadBalancerType
Underlying type: string
LoadBalancerType specifies the types of LoadBalancer.
Appears in:
LogLevel
Underlying type: string
LogLevel defines a log level for Envoy Gateway and EnvoyProxy system logs.
Appears in:
Match
Match defines the stats match configuration.
Appears in:
Field | Description |
---|
type MatcherType | MatcherType defines the stats matcher type |
value string | |
MatchType
Underlying type: string
MatchType specifies the semantics of how a string value should be compared. Valid MatchType values are “Exact”, “Prefix”, “Suffix”, “RegularExpression”.
Appears in:
MatcherType
Underlying type: string
Appears in:
MetricSinkType
Underlying type: string
Appears in:
OpenTelemetryEnvoyProxyAccessLog
TODO: consider reuse ExtensionService?
Appears in:
Field | Description |
---|
host string | Host define the extension service hostname. |
port integer | Port defines the port the extension service is exposed on. |
resources object (keys:string, values:string) | Resources is a set of labels that describe the source of a log entry, including envoy node info. It’s recommended to follow semantic conventions. |
ProviderType
Underlying type: string
ProviderType defines the types of providers supported by Envoy Gateway.
Appears in:
ProxyAccessLog
Appears in:
Field | Description |
---|
disable boolean | Disable disables access logging for managed proxies if set to true. |
settings ProxyAccessLogSetting array | Settings defines accesslog settings for managed proxies. If unspecified, will send default format to stdout. |
ProxyAccessLogFormat defines the format of accesslog. By default accesslogs are written to standard output.
Appears in:
Field | Description |
---|
type ProxyAccessLogFormatType | Type defines the type of accesslog format. |
text string | Text defines the text accesslog format, following Envoy accesslog formatting, It’s required when the format type is “Text”. Envoy command operators may be used in the format. The format string documentation provides more information. |
json object (keys:string, values:string) | JSON is additional attributes that describe the specific event occurrence. Structured format for the envoy access logs. Envoy command operators can be used as values for fields within the Struct. It’s required when the format type is “JSON”. |
Underlying type: string
Appears in:
ProxyAccessLogSetting
Appears in:
ProxyAccessLogSink
Appears in:
ProxyAccessLogSinkType
Underlying type: string
Appears in:
ProxyBootstrap
ProxyBootstrap defines Envoy Bootstrap configuration.
Appears in:
Field | Description |
---|
type BootstrapType | Type is the type of the bootstrap configuration, it should be either Replace or Merge. If unspecified, it defaults to Replace. |
value string | Value is a YAML string of the bootstrap. |
ProxyLogComponent
Underlying type: string
ProxyLogComponent defines a component that supports a configured logging level.
Appears in:
ProxyLogging
ProxyLogging defines logging parameters for managed proxies.
Appears in:
Field | Description |
---|
level object (keys:ProxyLogComponent, values:LogLevel) | Level is a map of logging level per component, where the component is the key and the log level is the value. If unspecified, defaults to “default: warn”. |
ProxyMetricSink
Appears in:
Field | Description |
---|
type MetricSinkType | Type defines the metric sink type. EG currently only supports OpenTelemetry. |
openTelemetry ProxyOpenTelemetrySink | OpenTelemetry defines the configuration for OpenTelemetry sink. It’s required if the sink type is OpenTelemetry. |
ProxyMetrics
Appears in:
Field | Description |
---|
prometheus ProxyPrometheusProvider | Prometheus defines the configuration for Admin endpoint /stats/prometheus . |
sinks ProxyMetricSink array | Sinks defines the metric sinks where metrics are sent to. |
matches Match array | Matches defines configuration for selecting specific metrics instead of generating all metrics stats that are enabled by default. This helps reduce CPU and memory overhead in Envoy, but eliminating some stats may affect critical functionality. Here are the stats that we strongly recommend not disabling: cluster_manager.warming_clusters, cluster.<cluster_name>.membership_total, cluster.<cluster_name>.membership_healthy, cluster.<cluster_name>.membership_degraded, reference https://github.com/envoyproxy/envoy/issues/9856, https://github.com/envoyproxy/envoy/issues/14610 |
enableVirtualHostStats boolean | EnableVirtualHostStats enables envoy stat metrics for virtual hosts. |
ProxyOpenTelemetrySink
Appears in:
Field | Description |
---|
host string | Host define the service hostname. |
port integer | Port defines the port the service is exposed on. |
ProxyPrometheusProvider
Appears in:
Field | Description |
---|
disable boolean | Disable the Prometheus endpoint. |
ProxyTelemetry
Appears in:
Field | Description |
---|
accessLog ProxyAccessLog | AccessLogs defines accesslog parameters for managed proxies. If unspecified, will send default format to stdout. |
tracing ProxyTracing | Tracing defines tracing configuration for managed proxies. If unspecified, will not send tracing data. |
metrics ProxyMetrics | Metrics defines metrics configuration for managed proxies. |
ProxyTracing
Appears in:
Field | Description |
---|
samplingRate integer | SamplingRate controls the rate at which traffic will be selected for tracing if no prior sampling decision has been made. Defaults to 100, valid values [0-100]. 100 indicates 100% sampling. |
customTags object (keys:string, values:CustomTag) | CustomTags defines the custom tags to add to each span. If provider is kubernetes, pod name and namespace are added by default. |
provider TracingProvider | Provider defines the tracing provider. Only OpenTelemetry is supported currently. |
RateLimit
RateLimit defines the configuration associated with the Rate Limit Service used for Global Rate Limiting.
Appears in:
Field | Description |
---|
backend RateLimitDatabaseBackend | Backend holds the configuration associated with the database backend used by the rate limit service to store state associated with global ratelimiting. |
timeout Duration | Timeout specifies the timeout period for the proxy to access the ratelimit server If not set, timeout is 20ms. |
failClosed boolean | FailClosed is a switch used to control the flow of traffic when the response from the ratelimit server cannot be obtained. If FailClosed is false, let the traffic pass, otherwise, don’t let the traffic pass and return 500. If not set, FailClosed is False. |
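As a sketch, the rate limit service might be enabled in the EnvoyGateway configuration as follows; the Redis URL is an assumption about your environment:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyGateway
gateway:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
provider:
  type: Kubernetes
rateLimit:
  backend:
    type: Redis
    redis:
      url: redis.redis-system.svc.cluster.local:6379   # assumed Redis service address
  timeout: 20ms
  failClosed: false                # let traffic pass if the rate limit server is unreachable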
RateLimitDatabaseBackend
RateLimitDatabaseBackend defines the configuration associated with the database backend used by the rate limit service.
Appears in:
RateLimitDatabaseBackendType
Underlying type: string
RateLimitDatabaseBackendType specifies the types of database backend to be used by the rate limit service.
Appears in:
RateLimitRedisSettings
RateLimitRedisSettings defines the configuration for connecting to redis database.
Appears in:
Field | Description |
---|
url string | URL of the Redis Database. |
tls RedisTLSSettings | TLS defines TLS configuration for connecting to redis database. |
RateLimitRule
RateLimitRule defines the semantics for matching attributes from the incoming requests, and setting limits for them.
Appears in:
Field | Description |
---|
clientSelectors RateLimitSelectCondition array | ClientSelectors holds the list of select conditions to select specific clients using attributes from the traffic flow. All individual select conditions must hold True for this rule and its limit to be applied. If this field is empty, it is equivalent to True, and the limit is applied. |
limit RateLimitValue | Limit holds the rate limit values. This limit is applied for traffic flows when the selectors compute to True, causing the request to be counted towards the limit. The limit is enforced and the request is ratelimited, i.e. a response with 429 HTTP status code is sent back to the client when the selected requests have reached the limit. |
RateLimitSelectCondition
RateLimitSelectCondition specifies the attributes within the traffic flow that can be used to select a subset of clients to be ratelimited. All the individual conditions must hold True for the overall condition to hold True.
Appears in:
Field | Description |
---|
headers HeaderMatch array | Headers is a list of request headers to match. Multiple header values are ANDed together, meaning, a request MUST match all the specified headers. |
sourceCIDR SourceMatch | SourceCIDR is the client IP Address range to match on. |
RateLimitSpec
RateLimitSpec defines the desired state of RateLimitSpec.
Appears in:
Field | Description |
---|
type RateLimitType | Type decides the scope for the RateLimits. Valid RateLimitType values are “Global”. |
global GlobalRateLimit | Global defines global rate limit configuration. |
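As a sketch of how the rate limit types above compose, the fragment below shows a global rule that limits clients sending a particular header value; the header name and limit values are illustrative, and the resource this spec attaches to depends on your Envoy Gateway version:
type: Global
global:
  rules:
    - clientSelectors:
        - headers:
            - name: x-user-id      # assumed header name
              value: alice
      limit:
        requests: 10
        unit: Minute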
RateLimitType
Underlying type: string
RateLimitType specifies the types of RateLimiting.
Appears in:
RateLimitUnit
Underlying type: string
RateLimitUnit specifies the intervals for setting rate limits. Valid RateLimitUnit values are “Second”, “Minute”, “Hour”, and “Day”.
Appears in:
RateLimitValue
RateLimitValue defines the limits for rate limiting.
Appears in:
RedisTLSSettings
RedisTLSSettings defines the TLS configuration for connecting to redis database.
Appears in:
Field | Description |
---|
certificateRef SecretObjectReference | CertificateRef defines the client certificate reference for TLS connections. Currently only a Kubernetes Secret of type TLS is supported. |
RemoteJWKS
RemoteJWKS defines how to fetch and cache JSON Web Key Sets (JWKS) from a remote HTTP/HTTPS endpoint.
Appears in:
Field | Description |
---|
uri string | URI is the HTTPS URI to fetch the JWKS. Envoy’s system trust bundle is used to validate the server certificate. |
RequestHeaderCustomTag
RequestHeaderCustomTag adds a value from a request header to each span.
Appears in:
Field | Description |
---|
name string | Name defines the name of the request header which to extract the value from. |
defaultValue string | DefaultValue defines the default value to use if the request header is not set. |
ResourceProviderType
Underlying type: string
ResourceProviderType defines the types of custom resource providers supported by Envoy Gateway.
Appears in:
SecurityPolicy
SecurityPolicy allows the user to configure various security settings for a Gateway.
Appears in:
Field | Description |
---|
apiVersion string | gateway.envoyproxy.io/v1alpha1 |
kind string | SecurityPolicy |
metadata ObjectMeta | Refer to Kubernetes API documentation for fields of metadata . |
spec SecurityPolicySpec | Spec defines the desired state of SecurityPolicy. |
SecurityPolicyList
SecurityPolicyList contains a list of SecurityPolicy resources.
Field | Description |
---|
apiVersion string | gateway.envoyproxy.io/v1alpha1 |
kind string | SecurityPolicyList |
metadata ListMeta | Refer to Kubernetes API documentation for fields of metadata . |
items SecurityPolicy array | |
SecurityPolicySpec
SecurityPolicySpec defines the desired state of SecurityPolicy.
Appears in:
Field | Description |
---|
targetRef PolicyTargetReferenceWithSectionName | TargetRef is the name of the Gateway resource this policy is being attached to. This Policy and the TargetRef MUST be in the same namespace for this Policy to have effect and be applied to the Gateway. |
cors CORS | CORS defines the configuration for Cross-Origin Resource Sharing (CORS). |
jwt JWT | JWT defines the configuration for JSON Web Token (JWT) authentication. |
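For illustration, a minimal sketch of a SecurityPolicy that attaches JWT authentication to a Gateway; the Gateway name, provider name, and JWKS URI are assumptions:
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: SecurityPolicy
metadata:
  name: jwt-example
  namespace: default
spec:
  targetRef:
    group: gateway.networking.k8s.io
    kind: Gateway
    name: eg                       # assumed Gateway name
  jwt:
    providers:
      - name: example
        remoteJWKS:
          uri: https://example.com/jwt/jwks.json   # assumed JWKS endpoint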
ServiceType
Underlying type: string
ServiceType describes ingress methods for a service.
Appears in:
SourceMatch
Appears in:
StringMatch
StringMatch defines how to match any strings. This is a general purpose match condition that can be used by other EG APIs that need to match against a string.
Appears in:
Field | Description |
---|
type MatchType | Type specifies how to match against a string. |
value string | Value specifies the string value that the match must have. |
TCPKeepalive
TCPKeepalive defines the TCP Keepalive configuration.
Appears in:
Field | Description |
---|
probes integer | The total number of unacknowledged probes to send before deciding the connection is dead. Defaults to 9. |
idleTime Duration | The duration a connection needs to be idle before keep-alive probes start being sent. Defaults to 7200s . |
interval Duration | The duration between keep-alive probes. Defaults to 75s . |
TracingProvider
Appears in:
Field | Description |
---|
type TracingProviderType | Type defines the tracing provider type. EG currently only supports OpenTelemetry. |
host string | Host defines the provider service hostname. |
port integer | Port defines the port the provider service is exposed on. |
TracingProviderType
Underlying type: string
Appears in:
XDSTranslatorHook
Underlying type: string
XDSTranslatorHook defines the types of hooks that an Envoy Gateway extension may support for the xds-translator.
Appears in:
XDSTranslatorHooks
XDSTranslatorHooks contains all the pre and post hooks for the xds-translator runner.
Appears in:
5 - Get Involved
This section includes content related to contributions.
5.1 - Roadmap
This section records the roadmap of Envoy Gateway.
This document serves as a high-level reference for Envoy Gateway users and contributors to understand the direction of
the project.
Contributing to the Roadmap
- To add a feature to the roadmap, create an issue or join a community meeting to discuss your use
case. If your feature is accepted, a maintainer will assign your issue to a release milestone and update
this document accordingly.
- To help with an existing roadmap item, comment on or assign yourself to the associated issue.
- If a roadmap item doesn’t have an issue, create one, assign yourself to the issue, and reference this document. A
maintainer will submit a pull request to add the feature to the roadmap. Note: The feature should be
discussed in an issue or a community meeting before implementing it.
If you don’t know where to start contributing, help is needed to reduce technical, automation, and documentation debt.
Look for issues with the help wanted
label to get started.
Details
Roadmap features and timelines may change based on feedback, community contributions, etc. If you depend on a specific
roadmap item, you’re encouraged to attend a community meeting to discuss the details, or help us deliver the feature by
contributing to the project.
Last Updated: April 2023
v0.2.0: Establish a Solid Foundation
- Complete the core Envoy Gateway implementation - Issue #60.
- Establish initial testing, e2e, integration, etc. - Issue #64.
- Establish user and developer project documentation - Issue #17.
- Achieve Gateway API conformance (e.g. routing, LB, Header transformation, etc.) - Issue #65.
- Setup a CI/CD pipeline - Issue #63.
v0.3.0: Drive Advanced Features through Extension Mechanisms
v0.4.0: Customizing Envoy Gateway
- Extending Envoy Gateway control plane - Issue #20
- Helm based installation for Envoy Gateway - Issue #650
- Customizing managed Envoy Proxy Kubernetes resource fields - Issue #648
- Configuring xDS Bootstrap - Issue #31
v0.5.0: Observability and Scale
v0.6.0: Preparation for GA
5.2 - Developer Guide
This section tells how to develop Envoy Gateway.
Envoy Gateway is built using a make-based build system. Our CI is based on GitHub Actions using workflows.
Prerequisites
go
make
docker
python3
- Need a python3 program
- Must have a functioning venv module; this is part of the standard library, but some distributions (such as Debian and Ubuntu) replace it with a stub and require you to install a python3-venv package separately.
Quickstart
- Run make help to see all the available targets to build, test and run Envoy Gateway.
Building
- Run make build to build all the binaries.
- Run make build BINS="envoy-gateway" to build the Envoy Gateway binary.
- Run make build BINS="egctl" to build the egctl binary.
Note: The binaries get generated in the bin/$OS/$ARCH directory, for example, bin/linux/amd64/.
Testing
Running Linters
- Run make lint to make sure your code passes all the linter checks.
Note: The golangci-lint configuration resides here.
Building and Pushing the Image
- Run IMAGE=docker.io/you/gateway-dev make image to build the docker image.
- Run IMAGE=docker.io/you/gateway-dev make push-multiarch to build and push the multi-arch docker image.
Note: Replace IMAGE with your registry's image name.
Deploying Envoy Gateway for Test/Dev
- Run make create-cluster to create a Kind cluster.
Option 1: Use the Latest gateway-dev Image
- Run TAG=latest make kube-deploy to deploy Envoy Gateway in the Kind cluster using the latest image. Replace latest to use a different image tag.
Option 2: Use a Custom Image
- Run make kube-install-image to build an image from the tip of your current branch and load it in the Kind cluster.
- Run IMAGE_PULL_POLICY=IfNotPresent make kube-deploy to install Envoy Gateway into the Kind cluster using your custom image.
Deploying Envoy Gateway in Kubernetes
- Run TAG=latest make kube-deploy to deploy Envoy Gateway using the latest image into a Kubernetes cluster (linked to the current kube context). Preface the command with IMAGE or replace TAG to use a different Envoy Gateway image or tag.
- Run make kube-undeploy to uninstall Envoy Gateway from the cluster.
Note: Envoy Gateway is tested against Kubernetes v1.24.0.
Demo Setup
- Run make kube-demo to deploy a demo backend service, gatewayclass, gateway and httproute resource (similar to steps outlined in the Quickstart docs) and test the configuration.
- Run make kube-demo-undeploy to delete the resources created by the make kube-demo command.
Run Gateway API Conformance Tests
The commands below deploy Envoy Gateway to a Kubernetes cluster and run the Gateway API conformance tests. Refer to the Gateway API conformance homepage to learn more about the tests. If Envoy Gateway is already installed, run TAG=latest make run-conformance to run the conformance tests.
On a Linux Host
- Run TAG=latest make conformance to create a Kind cluster, install Envoy Gateway using the latest gateway-dev image, and run Gateway API conformance tests.
On a Mac Host
Since Mac doesn’t support directly exposing the Docker network to the Mac host, use one of the following workarounds to run conformance tests:
- Deploy your own Kubernetes cluster or use Docker Desktop with Kubernetes support and then run TAG=latest make kube-deploy run-conformance. This will install Envoy Gateway using the latest gateway-dev image to the Kubernetes cluster using the current kubectl context and run the conformance tests. Use make kube-undeploy to uninstall Envoy Gateway.
- Install and run Docker Mac Net Connect and then run TAG=latest make conformance.
Note: Preface commands with IMAGE or replace TAG to use a different Envoy Gateway image or tag. If TAG is unspecified, the short SHA of your current branch is used.
Debugging the Envoy Config
An easy way to view the Envoy config that Envoy Gateway is using is to port-forward to the admin interface port (currently 19000) on the Envoy deployment that corresponds to a Gateway so that it can be accessed locally.
Get the name of the Envoy deployment. The following example is for Gateway eg in the default namespace:
export ENVOY_DEPLOYMENT=$(kubectl get deploy -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-namespace=default,gateway.envoyproxy.io/owning-gateway-name=eg -o jsonpath='{.items[0].metadata.name}')
Port forward the admin interface port:
kubectl port-forward deploy/${ENVOY_DEPLOYMENT} -n envoy-gateway-system 19000:19000
Now you are able to view the running Envoy configuration by navigating to 127.0.0.1:19000/config_dump.
There are many other endpoints on the Envoy admin interface that may be helpful when debugging.
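For example, with the port-forward above in place, a few of the standard Envoy admin endpoints can be queried directly (these are upstream Envoy admin paths, not specific to Envoy Gateway):
curl -s 127.0.0.1:19000/config_dump      # full running configuration
curl -s 127.0.0.1:19000/clusters         # upstream clusters, endpoints, and health status
curl -s 127.0.0.1:19000/stats            # raw stats (or /stats/prometheus for Prometheus format)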
JWT Testing
An example JSON Web Token (JWT) and JSON Web Key Set (JWKS) are used for the request authentication user guide. The JWT was created by the JWT Debugger, using the RS256 algorithm. The public key from the JWT’s verify signature was copied to JWK Creator for generating the JWK. The JWK Creator was configured with matching settings, i.e. the Signing public key use and the RS256 algorithm. The generated JWK was wrapped in a JWKS structure and is hosted in the repo.
5.3 - Contributing
This section tells how to contribute to Envoy Gateway.
We welcome contributions from the community. Please carefully review the project goals and the following guidelines to streamline your contributions.
Communication
- Before starting work on a major feature, please contact us via GitHub or Slack. We will ensure no
one else is working on it and ask you to open a GitHub issue.
- A “major feature” is defined as any change that is > 100 LOC altered (not including tests), or
changes any user-facing behavior. We will use the GitHub issue to discuss the feature and come to
agreement. This is to prevent your time being wasted, as well as ours. The GitHub review process
for major features is also important so that affiliations with commit access can
come to agreement on the design. If it’s appropriate to write a design document, the document must
be hosted either in the GitHub issue, or linked to from the issue and hosted in a world-readable
location.
- Small patches and bug fixes don’t need prior communication.
Inclusivity
The Envoy Gateway community has an explicit goal to be inclusive to all. As such, all PRs must adhere
to the following guidelines for all code, APIs, and documentation:
- The following words and phrases are not allowed:
- Whitelist: use allowlist instead.
- Blacklist: use denylist or blocklist instead.
- Master: use primary instead.
- Slave: use secondary or replica instead.
- Documentation should be written in an inclusive style. The Google developer
documentation contains an excellent
reference on this topic.
- The above policy is not considered definitive and may be amended in the future as industry best
practices evolve. Additional comments on this topic may be provided by maintainers during code
review.
Submitting a PR
- Fork the repo.
- Hack
- DCO sign-off each commit. This can be done with git commit -s.
- Submit your PR.
- Tests will automatically run for you.
- We will not merge any PR that is not passing tests.
- PRs are expected to have 100% test coverage for added code. This can be verified with a coverage
build. If your PR cannot have 100% coverage for some reason please clearly explain why when you
open it.
- Any PR that changes user-facing behavior must have associated documentation in the docs folder of the repo as
well as the changelog.
- All code comments and documentation are expected to have proper English grammar and punctuation.
If you are not a fluent English speaker (or a bad writer ;-)) please let us know and we will try
to find some help but there are no guarantees.
- Your PR title should be descriptive. It should generally start with a type (including a subsystem name in parentheses if necessary), followed by a colon and a summary, in the format chore/docs/feat/fix/refactor/style/test: summary.
Examples:
- “docs: fix grammar error”
- “feat(translator): add new feature”
- “fix: fix xx bug”
- “chore: change ci & build tools etc”
- Your PR commit message will be used as the commit message when your PR is merged. You should
update this field if your PR diverges during review.
- Your PR description should have details on what the PR does. If it fixes an existing issue it
should end with “Fixes #XXX”.
- If your PR is co-authored or based on an earlier PR from another contributor, please attribute them with Co-authored-by: name <name@example.com>. See GitHub’s multiple author guidance for further details.
- When all tests are passing and all other conditions described herein are satisfied, a maintainer will be assigned to review and merge the PR.
- Once you submit a PR, please do not rebase it. It’s much easier to review if subsequent commits
are new commits and/or merges. We squash and merge so the number of commits you have in the PR
doesn’t matter.
- We expect that once a PR is opened, it will be actively worked on until it is merged or closed.
We reserve the right to close PRs that are not making progress. This is generally defined as no
changes for 7 days. Obviously PRs that are closed due to lack of activity can be reopened later.
Closing stale PRs helps us to keep on top of all the work currently in flight.
Maintainer PR Review Policy
- See CODEOWNERS.md for the current list of maintainers.
- A maintainer representing a different affiliation from the PR owner is required to review and
approve the PR.
- When the project matures, it is expected that a “domain expert” for the code the PR touches should
review the PR. This person does not require commit access, just domain knowledge.
- The above rules may be waived for PRs which only update docs or comments, or trivial changes to
tests and tools (where trivial is decided by the maintainer in question).
- If there is a question on who should review a PR please discuss in Slack.
- Anyone is welcome to review any PR that they want, whether they are a maintainer or not.
- Please make sure that the PR title, commit message, and description are updated if the PR changes
significantly during review.
- Please clean up the title and body before merging. By default, GitHub fills the squash merge
title with the original title, and the commit body with every individual commit from the PR.
The maintainer doing the merge should make sure the title follows the guidelines above and should
overwrite the body with the original commit message from the PR (cleaning it up if necessary)
while preserving the PR author’s final DCO sign-off.
Decision making
This is a new and complex project, and we need to make a lot of decisions very quickly.
To this end, we’ve settled on this process for making (possibly contentious) decisions:
- For decisions that need a record, we create an issue.
- In that issue, we discuss opinions, then a maintainer can call for a vote in a comment.
- Maintainers can cast binding votes on that comment by reacting or replying in another comment.
- Non-maintainer community members are welcome to cast non-binding votes by either of these methods.
- Voting will be resolved by simple majority.
- In the event of deadlocks, the question will be put to steering instead.
DCO: Sign your work
The sign-off is a simple line at the end of the explanation for the
patch, which certifies that you wrote it or otherwise have the right to
pass it on as an open-source patch. The rules are pretty simple: if you
can certify the below (from
developercertificate.org):
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe@gmail.com>
using your real name (sorry, no pseudonyms or anonymous contributions.)
You can add the sign-off when creating the git commit via git commit -s.
If you want this to be automatic you can set up some aliases:
git config --add alias.amend "commit -s --amend"
git config --add alias.c "commit -s"
Fixing DCO
If your PR fails the DCO check, it’s necessary to fix the entire commit history in the PR. Best
practice is to squash
the commit history to a single commit, append the DCO sign-off as described above, and force
push. For example, if you have 2 commits in
your history:
git rebase -i HEAD^^
(interactive squash + DCO append)
git push origin -f
Note, that in general rewriting history in this way is a hindrance to the review process and this
should only be done to correct a DCO mistake.
5.4 - Code of Conduct
This section includes Code of Conduct of Envoy Gateway.
Envoy Gateway follows the CNCF Code of Conduct.
5.5 - Maintainers
This section includes Maintainers of Envoy Gateway.
The following maintainers, listed in alphabetical order, own everything
- @Alice-Lilith
- @arkodg
- @Xunzhuo
- @zirain
- @qicz
Emeritus Maintainers
- @danehans
- @alexgervais
- @skriss
- @youngnick
5.6 - Release Process
This section tells the release process of Envoy Gateway.
This document guides maintainers through the process of creating an Envoy Gateway release.
Release Candidate
The following steps should be used for creating a release candidate.
Prerequisites
- Permissions to push to the Envoy Gateway repository.
Set environment variables for use in subsequent steps:
export MAJOR_VERSION=0
export MINOR_VERSION=3
export RELEASE_CANDIDATE_NUMBER=1
export GITHUB_REMOTE=origin
Clone the repo, checkout the main branch, ensure it’s up-to-date, and confirm your local branch is clean.
Create a topic branch for adding the release notes and updating the VERSION file with the release version. Refer to previous release notes and VERSION for additional details.
Sign, commit, and push your changes to your fork.
Submit a Pull Request to merge the changes into the main branch. Do not proceed until your PR has merged and the Build and Test workflow has successfully completed.
Create a new release branch from main. The release branch should be named release/v${MAJOR_VERSION}.${MINOR_VERSION}, e.g. release/v0.3.
git checkout -b release/v${MAJOR_VERSION}.${MINOR_VERSION}
Push the branch to the Envoy Gateway repo.
git push ${GITHUB_REMOTE} release/v${MAJOR_VERSION}.${MINOR_VERSION}
Create a topic branch for updating the Envoy proxy image and Envoy Ratelimit image to the tag supported by the release. Reference PR #2098
for additional details on updating the image tag.
Sign, commit, and push your changes to your fork.
Submit a Pull Request to merge the changes into the release/v${MAJOR_VERSION}.${MINOR_VERSION} branch. Do not proceed until your PR has merged into the release branch and the Build and Test has completed for your PR.
Ensure your release branch is up-to-date and tag the head of your release branch with the release candidate number.
git tag -a v${MAJOR_VERSION}.${MINOR_VERSION}.0-rc.${RELEASE_CANDIDATE_NUMBER} -m 'Envoy Gateway v${MAJOR_VERSION}.${MINOR_VERSION}.0-rc.${RELEASE_CANDIDATE_NUMBER} Release Candidate'
Push the tag to the Envoy Gateway repository.
git push ${GITHUB_REMOTE} v${MAJOR_VERSION}.${MINOR_VERSION}.0-rc.${RELEASE_CANDIDATE_NUMBER}
This will trigger the release GitHub action that generates the release, release artifacts, etc.
Confirm that the release workflow completed successfully.
Confirm that the Envoy Gateway image with the correct release tag was published to Docker Hub.
Confirm that the release was created.
Note that the Quickstart Guide references are not updated for release candidates. However, test
the quickstart steps using the release candidate by manually updating the links.
Generate the GitHub changelog.
Ensure you check the “This is a pre-release” checkbox when editing the GitHub release.
If you find any bugs in this process, please create an issue.
Setup cherry picker action
After the release branch is cut, the RM (Release Manager) should add a cherry-pick action job for the target release.
The configuration looks like the following:
cherry_pick_release_v0_4:
runs-on: ubuntu-latest
name: Cherry pick into release-v0.4
if: ${{ contains(github.event.pull_request.labels.*.name, 'cherrypick/release-v0.4') && github.event.pull_request.merged == true }}
steps:
- name: Checkout
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
fetch-depth: 0
- name: Cherry pick into release/v0.4
uses: carloscastrojumo/github-cherry-pick-action@a145da1b8142e752d3cbc11aaaa46a535690f0c5 # v1.0.9
with:
branch: release/v0.4
title: "[release/v0.4] {old_title}"
body: "Cherry picking #{old_pull_request_id} onto release/v0.4"
labels: |
cherrypick/release-v0.4
# put release manager here
reviewers: |
Alice-Lilith
Replace v0.4 with the real branch name, and Alice-Lilith with the real name of the RM.
Minor Release
The following steps should be used for creating a minor release.
Prerequisites
- Permissions to push to the Envoy Gateway repository.
- A release branch that has been cut from the corresponding release candidate. Refer to the
Release Candidate section for additional details on cutting a release candidate.
Set environment variables for use in subsequent steps:
export MAJOR_VERSION=0
export MINOR_VERSION=3
export GITHUB_REMOTE=origin
Clone the repo, checkout the main branch, ensure it’s up-to-date, and confirm your local branch is clean.
Create a topic branch for adding the release notes, release announcement, and versioned release docs.
- Create the release notes. Reference previous release notes for additional details. Note: The release
notes should be an accumulation of the release candidate release notes and any changes since the release
candidate.
- Create a release announcement. Refer to PR #635 as an example release announcement.
- Include the release in the compatibility matrix. Refer to PR #1002 as an example.
- Generate the versioned release docs:
make docs-release TAG=v${MAJOR_VERSION}.${MINOR_VERSION}.0
- Update the links referenced by the Get Started and Contributing buttons in site/content/en/_index.md:
<a class="btn btn-lg btn-primary me-3 mb-4" href="/v0.5.0">
Get Started <i class="fas fa-arrow-alt-circle-right ms-2"></i>
</a>
<a class="btn btn-lg btn-secondary me-3 mb-4" href="/v0.5.0/contributions">
Contributing <i class="fa fa-heartbeat ms-2 "></i>
</a>
- Update the link referenced by the Documentation menu entry in site/hugo.toml:
[[menu.main]]
name = "Documentation"
weight = -101
pre = "<i class='fas fa-book pr-2'></i>"
url = "/v0.5.0"
Sign, commit, and push your changes to your fork.
Submit a Pull Request to merge the changes into the main
branch. Do not proceed until all your PRs have merged
and the Build and Test has completed for your final PR.
Checkout the release branch.
git checkout -b release/v${MAJOR_VERSION}.${MINOR_VERSION} $GITHUB_REMOTE/release/v${MAJOR_VERSION}.${MINOR_VERSION}
If the tip of the release branch does not match the tip of main, perform the following:
Create a topic branch from the release branch.
Cherry-pick the commits from main
that differ from the release branch.
Run tests locally, e.g. make lint.
Sign, commit, and push your topic branch to your Envoy Gateway fork.
Submit a PR to merge the topic branch from your fork into the Envoy Gateway release branch.
Do not proceed until the PR has merged and CI passes for the merged PR.
If you are still on your topic branch, change to the release branch:
git checkout release/v${MAJOR_VERSION}.${MINOR_VERSION}
Ensure your local release branch is up-to-date:
git pull $GITHUB_REMOTE release/v${MAJOR_VERSION}.${MINOR_VERSION}
Tag the head of your release branch with the release tag. For example:
git tag -a v${MAJOR_VERSION}.${MINOR_VERSION}.0 -m 'Envoy Gateway v${MAJOR_VERSION}.${MINOR_VERSION}.0 Release'
Note: The tag version differs from the release branch by including the .0
patch version.
Push the tag to the Envoy Gateway repository.
git push origin v${MAJOR_VERSION}.${MINOR_VERSION}.0
This will trigger the release GitHub action that generates the release, release artifacts, etc.
Confirm that the release workflow completed successfully.
Confirm that the Envoy Gateway image with the correct release tag was published to Docker Hub.
Confirm that the release was created.
Confirm that the steps in the Quickstart Guide work as expected.
Generate the GitHub changelog and include the following text at the beginning of the release page:
# Release Announcement
Check out the [v${MAJOR_VERSION}.${MINOR_VERSION} release announcement]
(https://gateway.envoyproxy.io/releases/v${MAJOR_VERSION}.${MINOR_VERSION}.html) to learn more about the release.
If you find any bugs in this process, please create an issue.
Announce the Release
It’s important that the world knows about the release. Use the following steps to announce the release.
Set the release information in the Envoy Gateway Slack channel. For example:
Envoy Gateway v${MAJOR_VERSION}.${MINOR_VERSION} has been released: https://github.com/envoyproxy/gateway/releases/tag/v${MAJOR_VERSION}.${MINOR_VERSION}.0
Send a message to the Envoy Gateway Slack channel. For example:
On behalf of the entire Envoy Gateway community, I am pleased to announce the release of Envoy Gateway
v${MAJOR_VERSION}.${MINOR_VERSION}. A big thank you to all the contributors that made this release possible.
Refer to the official v${MAJOR_VERSION}.${MINOR_VERSION} announcement for release details and the project docs
to start using Envoy Gateway.
...
Link to the GitHub release and release announcement page that highlights the release.
5.7 - Working on Envoy Gateway Docs
This section tells the development of Envoy Gateway Documents.
The documentation for Envoy Gateway lives in the site/content/en directory. Any individual document can be written using Markdown.
Documentation Structure
We now support versioned docs; the directory name under docs represents the docs version. The root of the latest site is in site/content/en/latest. This is probably where to start if you’re trying to understand how things fit together.
Note that new content should be added to site/content/en/latest and will be cut into a versioned directory at the next release. The contents under site/content/en/v0.5.0 are auto-generated and usually do not need to be changed, unless you find that the current release pages contain incorrect content. If so, you should send a PR that updates the content in both site/content/en/latest and site/content/en/v0.5.0.
By default, you access the website that represents the current release; the website that contains the latest changes is available via the Here link or at the footer of the pages.
Documentation Workflow
To work with the docs, just edit Markdown files in site/content/en/latest, then run the docs build target. This will create site/public with the built HTML pages, which you can preview by serving the built site locally.
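The exact target names can be confirmed with make help; assuming the repository’s standard docs targets, the workflow looks like:
make docs          # build the docs into site/public (assumed target name)
make docs-serve    # serve the built site locally for preview (assumed target name)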
If you want to generate a new release version of the docs, like v0.6.0, then run:
make docs-release TAG=v0.6.0
This will update the VERSION file at the project root, which records the current release version and is used in the pages’ version context and the binary version output. It will also generate a new directory site/content/en/v0.6.0, which contains the docs for v0.6.0, and update the artifact links to v0.6.0 in all files under site/content/en/v0.6.0/user, such as quickstart.md, http-routing.md, etc.
Publishing Docs
Whenever docs are pushed to main, CI will publish the built docs to GitHub Pages. For more details, see .github/workflows/docs.yaml.
Reference
Go to Hugo and Docsy to learn more.