Kedge starts up a gRPC and HTTP reverse proxy. It is driven by two main configuration files that define route -> backend pairs; these tell Kedge how to route each request.
Configuration for backends is passed inline via `--kedge_config_backendpool_config`, or read from a file via `--kedge_config_backendpool_config_path`:
```json
{
  "grpc": {
    "backends": [
      {
        "name": "controller",
        "balancer": "ROUND_ROBIN",
        "interceptors": [
          {
            "prometheus": true
          }
        ],
        "srv": {
          "dns_name": "controller.eu-prod.internal.example.com"
        }
      }
    ]
  },
  "http": {
    "backends": [
      {
        "name": "controller",
        "balancer": "ROUND_ROBIN",
        "k8s": {
          "dns_port_name": "controller.default:http"
        }
      }
    ]
  }
}
```
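If you hand-edit these files, a quick JSON sanity check can catch problems (e.g. trailing commas) before restarting kedge. Below is a minimal, hypothetical Go sketch; the struct shapes merely mirror the fields shown above, while kedge's real config types are richer:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// Minimal shapes mirroring the JSON above; hypothetical, not kedge's types.
type backend struct {
	Name     string          `json:"name"`
	Balancer string          `json:"balancer"`
	SRV      json.RawMessage `json:"srv"`
	K8s      json.RawMessage `json:"k8s"`
}

type pool struct {
	Backends []backend `json:"backends"`
}

type backendpoolConfig struct {
	GRPC pool `json:"grpc"`
	HTTP pool `json:"http"`
}

func main() {
	raw, err := os.ReadFile("misc/backendpool.json")
	if err != nil {
		log.Fatalf("read: %v", err)
	}
	var cfg backendpoolConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		log.Fatalf("invalid JSON: %v", err) // catches e.g. trailing commas
	}
	for _, b := range append(cfg.GRPC.Backends, cfg.HTTP.Backends...) {
		if b.Name == "" {
			log.Fatal("backend without a name")
		}
		fmt.Printf("backend %q ok (balancer=%s)\n", b.Name, b.Balancer)
	}
}
```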
Configuration for routes is passed inline via `--kedge_config_director_config`, or read from a file via `--kedge_config_director_config_path`:
```json
{
  "grpc": {
    "routes": [
      {
        "backend_name": "controller",
        "service_name_matcher": "*",
        "authority_host_matcher": "controller.ext.cluster.local"
      }
    ]
  },
  "http": {
    "routes": [
      {
        "backend_name": "controller",
        "host_matcher": "controller.ext.cluster.local",
        "port_matcher": 8081
      }
    ],
    "adhoc_rules": [
      {
        "dns_name_matcher": "*.pod.cluster.local",
        "port": {
          "allowed_ranges": [
            {
              "from": 40,
              "to": 10000
            }
          ]
        }
      }
    ]
  }
}
```
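To make the matching semantics concrete, here is an illustrative Go sketch of a host/port lookup against such routes and adhoc rules. The types, first-match order, and `*` glob handling are assumptions for illustration, not kedge's actual implementation:

```go
package main

import (
	"fmt"
	"path"
)

// route and adhocRule mirror the director JSON fields above.
type route struct {
	BackendName string
	HostMatcher string
	PortMatcher int
}

type portRange struct{ From, To int }

type adhocRule struct {
	DNSNameMatcher string
	AllowedRanges  []portRange
}

// match resolves host:port to a named backend, or to a direct adhoc dial.
func match(routes []route, adhoc []adhocRule, host string, port int) string {
	for _, r := range routes {
		if r.HostMatcher == host && (r.PortMatcher == 0 || r.PortMatcher == port) {
			return "backend: " + r.BackendName
		}
	}
	for _, a := range adhoc {
		// path.Match gives simple '*' glob semantics for the DNS matcher.
		if ok, _ := path.Match(a.DNSNameMatcher, host); ok {
			for _, pr := range a.AllowedRanges {
				if port >= pr.From && port <= pr.To {
					return fmt.Sprintf("adhoc: dial %s:%d directly", host, port)
				}
			}
		}
	}
	return "no route"
}

func main() {
	routes := []route{{BackendName: "controller", HostMatcher: "controller.ext.cluster.local", PortMatcher: 8081}}
	adhoc := []adhocRule{{DNSNameMatcher: "*.pod.cluster.local", AllowedRanges: []portRange{{From: 40, To: 10000}}}}
	fmt.Println(match(routes, adhoc, "controller.ext.cluster.local", 8081)) // backend: controller
	fmt.Println(match(routes, adhoc, "10-0-0-1.pod.cluster.local", 8080))  // adhoc: dial ... directly
}
```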
See `go run ./cmd/kedge/*.go --help` for other flags that configure items like:
- listen addresses
- certs
- OIDC
- HTTP/gRPC options
- dynamic discovery
- logging
Here's an example that runs the server listening on three ports (80 for debug HTTP, 443 for HTTPS + gRPC TLS, 444 for gRPC TLS only), requiring client-side certs:
```shell
go run ./cmd/kedge/*.go \
  --server_grpc_tls_port=444 \
  --server_http_port=80 \
  --server_http_tls_port=443 \
  --server_tls_cert_file=misc/localhost.crt \
  --server_tls_key_file=misc/localhost.key \
  --server_tls_client_ca_files=misc/ca.crt \
  --server_tls_client_cert_required=true \
  --kedge_config_director_config_path=misc/director.json \
  --kedge_config_backendpool_config_path=misc/backendpool.json
```
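To verify the client-cert requirement end to end, a Go client like the sketch below can call the HTTPS port with a client keypair. The `misc/client.crt`/`misc/client.key` paths are hypothetical, and which route actually matches depends on your `director.json`:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	// Hypothetical client keypair signed by the CA in misc/ca.crt.
	cert, err := tls.LoadX509KeyPair("misc/client.crt", "misc/client.key")
	if err != nil {
		log.Fatal(err)
	}
	// Trust the kedge server cert (misc/localhost.crt in this example).
	caPool := x509.NewCertPool()
	caPEM, err := os.ReadFile("misc/localhost.crt")
	if err != nil {
		log.Fatal(err)
	}
	caPool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      caPool,
		},
	}}
	// Override the Host header so it matches a host_matcher in director.json.
	req, err := http.NewRequest("GET", "https://localhost:443/", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Host = "controller.ext.cluster.local"
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}
```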
Optionally, you can skip the client-side cert requirement and perform authorization based on an OIDC JWT ID token (in case you already have an OIDC provider running that supports filling permissions into an ID-token claim):
```shell
go run ./cmd/kedge/*.go \
  --server_grpc_tls_port=444 \
  --server_http_port=80 \
  --server_http_tls_port=443 \
  --server_tls_cert_file=misc/localhost.crt \
  --server_tls_key_file=misc/localhost.key \
  --server_tls_client_cert_required=false \
  --kedge_config_director_config_path=misc/director.json \
  --kedge_config_backendpool_config_path=misc/backendpool.json \
  --server_oidc_provider_url="https://issuer.example.org" \
  --server_oidc_client_id="<some-client-id>" \
  --server_oidc_perms_claim=perms \
  --server_oidc_required_perm="perms-prod-example"
```
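The last two flags mean the ID token's `perms` claim must contain `perms-prod-example`. As a rough illustration of that claim check (kedge itself performs full OIDC signature verification, which this sketch deliberately skips):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"strings"
)

// hasPerm decodes a JWT payload (without verifying the signature) and
// checks whether the given claim contains the required permission,
// mirroring --server_oidc_perms_claim / --server_oidc_required_perm.
func hasPerm(idToken, claim, required string) (bool, error) {
	parts := strings.Split(idToken, ".")
	if len(parts) != 3 {
		return false, fmt.Errorf("not a JWT")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return false, err
	}
	var claims map[string]interface{}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return false, err
	}
	perms, _ := claims[claim].([]interface{})
	for _, p := range perms {
		if p == required {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Hypothetical unsigned token whose payload is {"perms":["perms-prod-example"]}.
	payload := base64.RawURLEncoding.EncodeToString([]byte(`{"perms":["perms-prod-example"]}`))
	token := "eyJhbGciOiJub25lIn0." + payload + ".sig"
	ok, err := hasPerm(token, "perms", "perms-prod-example")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("authorized:", ok)
}
```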
Running it locally with the k8s resolver or dynamic routing discovery requires access to a Kubernetes cluster. You can grant that by adding the flags:

```shell
--k8sclient_kubeapi_url="<kubernetes master URL (usually with port 6443)>"
--k8sclient_tls_insecure
```
To gain access you need to pass either a token:

```shell
--k8sclient_token_file="<file with a simple one-line token; by default /var/run/secrets/kubernetes.io/serviceaccount/token>"
```
Or a user from your kubeconfig:

```shell
--k8sclient_kubeconfig_user="<user from kubeconfig>"
```
Default values are designed to work from inside a pod, so when deployed to the cluster, kedge should not require any of these flags.
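Kedge wires its own Kubernetes client from the flags above. For reference, the standard client-go equivalent of "use in-pod defaults, fall back to a local kubeconfig" looks roughly like this sketch (it assumes the `k8s.io/client-go` module is available):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// In-cluster config uses the mounted service-account token at
	// /var/run/secrets/kubernetes.io/serviceaccount/token, matching
	// kedge's in-pod defaults; fall back to ~/.kube/config locally.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			log.Fatal(err)
		}
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	svcs, err := clientset.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("cluster reachable: %d services visible\n", len(svcs.Items))
}
```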
Dynamic routing discovery is a convenient addition to manually defined routes and backends (passed via the `--kedge_config_director_config_path` and `--kedge_config_backendpool_config_path` flags). Routing discovery provides fresh director and backendpool configuration, filled with routes autogenerated from service annotations.
It watches every service, in any namespace, that has a label named `<discovery_label_annotation_prefix>/kedge-exposed`. It goes through every port in the service's spec and generates a route -> backend pair. For each spec of the form:
```yaml
kind: Service
metadata:
  labels:
    <discovery_label_annotation_prefix>/kedge-exposed: "true"
  ...
spec:
  ports:
  - port: 1234
    name: "http-something"
    targetPort: "pods-port"
```
it generates an HTTP route of the form:
```json
{
  "backend_name": "<service-name>_<namespace>_pods-port",
  "host_matcher": "<service-name>.<namespace>.svc.<--discovery_external_domain_suffix>",
  "proxy_mode": "REVERSE_PROXY",
  "port_matcher": 1234,
  "autogenerated": true
}
```
The host matcher follows the common Kubernetes short service-name form, as default KubeDNS resolves it. Discovery also generates an HTTP backend of the form:
```json
{
  "name": "<service-name>_<namespace>_pods-port",
  "k8s": {
    "dns_port_name": "<service-name>.<namespace>:pods-port"
  },
  "autogenerated": true
}
```
This is assumed to be HTTP because the port name starts with `http-<...>` (it can also be named exactly `http`). The same applies for gRPC when the name is `grpc` or starts with `grpc-<...>`.
NOTE: If your backend is behind TLS, name your port `httptls` or `httptls-<...>` for HTTPS, or `grpctls` or `grpctls-<...>` to set up insecure TLS between kedge and the backend. Currently only insecure local cluster traffic is supported, since there is no easy way to configure proper per-port certs in the Kubernetes `service.yaml` itself. This may be implemented in the future, though.
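The port-name rules above boil down to a small prefix match. A minimal sketch of that classification (not kedge's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// classify maps a Kubernetes service port name to the protocol that
// discovery would assume for it, per the rules described above.
func classify(portName string) string {
	for _, rule := range []struct{ prefix, proto string }{
		{"httptls", "HTTPS backend (insecure TLS to the backend)"},
		{"grpctls", "gRPC backend over insecure TLS"},
		{"http", "plain HTTP backend"},
		{"grpc", "plain gRPC backend"},
	} {
		if portName == rule.prefix || strings.HasPrefix(portName, rule.prefix+"-") {
			return rule.proto
		}
	}
	return "ignored: no http/grpc prefix" // silently skipped by discovery
}

func main() {
	for _, name := range []string{"http-something", "grpc", "httptls-api", "metrics"} {
		fmt.Printf("%-15s -> %s\n", name, classify(name))
	}
}
```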
If you wish to override `host_matcher` or `service_name_matcher`, use the annotations:

```
<--discovery_label_annotation_prefix>host-matcher = <domain>
<--discovery_label_annotation_prefix>service-name-matcher = <domain>
```
NOTE:
- The backend name is always of the form `<service>_<namespace>_<port-name>`.
- If no port name is provided, or the port name does not follow the `grpc(-...)` / `http(-...)` forms above, the port is silently ignored (!).
- `targetPort` can be given either as a (pod) port name or as a port number.
- There is no check for duplicated `host_matcher`s, either among annotations or between autogenerated and base routes (!).
- There is no check that the target port actually exists inside the service.
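Putting the generation rules and these notes together, here is a hypothetical sketch of how discovery could derive the route/backend pair for one exposed service port (the `externalSuffix` parameter stands in for `--discovery_external_domain_suffix`; the `proxy_mode` field is omitted for brevity):

```go
package main

import "fmt"

// svcPort captures the pieces of a Service spec that discovery reads.
type svcPort struct {
	Service    string
	Namespace  string
	TargetPort string // (pod) port name from targetPort
	Port       int    // service port from spec.ports[].port
}

// generate renders the autogenerated route and backend entries,
// following the forms shown above.
func generate(p svcPort, externalSuffix string) (route, backend string) {
	backendName := fmt.Sprintf("%s_%s_%s", p.Service, p.Namespace, p.TargetPort)
	hostMatcher := fmt.Sprintf("%s.%s.svc.%s", p.Service, p.Namespace, externalSuffix)
	route = fmt.Sprintf(`{"backend_name": %q, "host_matcher": %q, "port_matcher": %d, "autogenerated": true}`,
		backendName, hostMatcher, p.Port)
	backend = fmt.Sprintf(`{"name": %q, "k8s": {"dns_port_name": "%s.%s:%s"}, "autogenerated": true}`,
		backendName, p.Service, p.Namespace, p.TargetPort)
	return route, backend
}

func main() {
	r, b := generate(svcPort{Service: "controller", Namespace: "default", TargetPort: "pods-port", Port: 1234}, "cluster.example.com")
	fmt.Println(r)
	fmt.Println(b)
}
```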