how to configure envoyfilter to support ratelimit in istio 1.5.0? #22068
@sd797994
@catman002 There is an Envoy ratelimit example that I hope can help you: https://github.com/jbarratt/envoy_ratelimit_example. It covers simple strategies only, so it should work if your mixer policies are not too complicated...
@gargnupur Is there work going on to provide an example setup using the Envoy rate limit filter?
@bianpengyuan @gargnupur After much trial and error, here is a working template for rate limiting the default Istio ingress gateway:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: filter-ratelimit
namespace: istio-system
spec:
workloadSelector:
# select by label in the same namespace
labels:
istio: ingressgateway
configPatches:
# The Envoy config you want to modify
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.http_connection_manager"
subFilter:
name: "envoy.router"
patch:
operation: INSERT_BEFORE
value:
name: envoy.rate_limit
config:
# domain can be anything! Match it to the ratelimiter service config
domain: test
rate_limit_service:
grpc_service:
envoy_grpc:
cluster_name: rate_limit_service
timeout: 0.25s
- applyTo: CLUSTER
match:
cluster:
service: ratelimit.default.svc.cluster.local
patch:
operation: ADD
value:
name: rate_limit_service
type: STRICT_DNS
connect_timeout: 0.25s
lb_policy: ROUND_ROBIN
http2_protocol_options: {}
hosts:
- socket_address:
address: ratelimit.default.svc.cluster.local
port_value: 8081
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: filter-ratelimit-svc
namespace: istio-system
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: VIRTUAL_HOST
match:
context: GATEWAY
routeConfiguration:
vhost:
name: "*:80"
route:
action: ANY
patch:
operation: MERGE
value:
rate_limits:
- actions: # any actions in here
# Multiple actions nest the descriptors
# https://www.envoyproxy.io/docs/envoy/latest/configuration/http/http_filters/rate_limit_filter#config-http-filters-rate-limit-composing-actions
# - generic_key:
# descriptor_value: "test"
- request_headers:
header_name: "Authorization"
descriptor_key: "auth"
# - remote_address: {}
# - destination_cluster: {}
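For completeness, the CLUSTER patch above assumes a ratelimit gRPC service answering at ratelimit.default.svc.cluster.local:8081. A minimal Kubernetes Service sketch for that assumption (the selector label is a guess; match it to your own ratelimit deployment):

apiVersion: v1
kind: Service
metadata:
  name: ratelimit
  namespace: default
spec:
  ports:
  # gRPC port that the EnvoyFilter cluster patch points at
  - name: grpc
    port: 8081
    targetPort: 8081
  selector:
    app: ratelimit  # assumption: the pod label of your ratelimit deployment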
Do you have any plans to reduce the complexity of the rate limit configuration? This seems like a core service mesh feature and is implemented in other meshes.
@jsenon: We would like to know the pain points you are facing, as that would help us know what we need to improve, and we can take care of it in the next release of Istio...
@devstein: Great that it worked for you, and thanks for the example! Can you share any problems that you faced or improvements that you would like to see?
Hi @gargnupur, thanks for your reply. Gloo and Ambassador have implemented a simple way of configuring this. Why not add the rate limiting feature to the VirtualService, or have a single rate-limiter CRD that translates simple user configuration into Envoy proxy config?
Hi @gargnupur, thanks for tackling this! The two biggest challenges I faced were:
Let me know if I can help in any other way!
If you don't mind me asking, how would you pass the Lyft config into these
@devstein From the snippet you kindly provided, I can only see the filters that match a certain header. But where did you put the corresponding configuration for how many requests per unit of time are allowed? Thanks!
@songford An example rate limit config for the snippet I provided would be:

domain: test
descriptors:
# match the descriptor_key from the EnvoyFilter
- key: auth
# Do not include a value unless you know what auth value you want to rate limit (i.e a specific API_KEY)
rate_limit: # describe the rate limit
unit: minute
      requests_per_unit: 60

This config is loaded by the ratelimit service you defined in the rate_limit_service cluster. If you wanted to filter by remote_address instead:

domain: test
descriptors:
# Naively rate-limit by IP
- key: remote_address
rate_limit:
unit: minute
      requests_per_unit: 60

I hope this helps!
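To illustrate the composing-actions note in the EnvoyFilter comments above: when a single rate_limits entry lists several actions, Envoy emits one combined descriptor, so the ratelimit service config must nest the keys in the same order. A sketch, assuming an auth request_headers action followed by a remote_address action (the limit is illustrative):

domain: test
descriptors:
  # first action: request_headers with descriptor_key "auth"
  - key: auth
    descriptors:
      # second action: remote_address, nested under the first
      - key: remote_address
        rate_limit:
          unit: minute
          requests_per_unit: 60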
@devstein Thanks a lot! It really helps!
Several points I wish to bring up in the hope of helping folks who run into this post with similar requirements:
Any plans to support this natively in Istio?
@songford @bianpengyuan @gargnupur @devstein Can somebody look at the configuration below and help us? It does not create a routes entry (RDS) in the Envoy config_dump, but the cluster entry (CDS) is there.
Our ServiceEntry:
@VinothChinnadurai: Can you share your config_dump? The config looks OK. For reference, I followed the examples above and this has been working for me: https://github.com/istio/istio/compare/master...gargnupur:nup_try_ratelimit_envoy?expand=1#diff-87007efb70dda4500545ba652cb0b30e
What does your rate limit service config look like? Have you tried simplifying your rate limit actions as a sanity check (i.e. only use remote_address)? Also, did you try explicitly creating a cluster for your rate limit service? If you can, post your config_dump. For reference:

configPatches:
# The Envoy config you want to modify
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.http_connection_manager"
subFilter:
name: "envoy.router"
patch:
operation: INSERT_BEFORE
value:
name: envoy.rate_limit
config:
# domain can be anything! Match it to the ratelimiter service config
domain: test
rate_limit_service:
grpc_service:
envoy_grpc:
cluster_name: rate_limit_service
timeout: 0.25s
- applyTo: CLUSTER
match:
cluster:
service: ratelimit.default.svc.cluster.local
patch:
operation: ADD
value:
name: rate_limit_service
type: STRICT_DNS
connect_timeout: 0.25s
lb_policy: ROUND_ROBIN
http2_protocol_options: {}
hosts:
- socket_address:
address: endpoint-new.default.svc.cluster.local
              port_value: 81
@gargnupur @devstein First of all, thanks a lot for your responses.
This is the host on which we are trying to apply ratelimit: abcdefghi.xxx.com
Kindly unblock us by suggesting what the issue is here.
I was referring to the Envoy proxy ratelimit service, but I see you are using a custom gRPC service.
I'm referring to simplifying the rate limit actions. See below:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: filter-ratelimit-svc
namespace: istio-system
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: VIRTUAL_HOST
match:
context: GATEWAY
routeConfiguration:
vhost:
name: "*:80"
route:
action: ANY
patch:
operation: MERGE
value:
rate_limits:
- actions:
- remote_address: {}
What version of Istio are you using? @VinothChinnadurai Your route definition looks correct; unfortunately, I'm not sure what your issue is. As a next step, I suggest enabling debug-level logging on your ingress gateway pod to see what is going on:

kubectl -n istio-system exec svc/istio-ingressgateway -- curl -X POST "localhost:15000/logging?filter=debug" -s
kubectl -n istio-system logs svc/istio-ingressgateway -f
# make requests via another terminal
@devstein @gargnupur
The above applied without any issue. We tried a sanity check using remote_address: {} only, as you mentioned, and the call reaches our ratelimit service :) But if we try with the necessary headers (removing remote_address: {}, as in the manifests above), the call does not reach it. Does that mean the issue is with the headers? Kindly suggest what the issue is here.
Do all the headers have values in the actual request? Please note that if the request does not have a value for any of those headers, Envoy skips calling the rate limit service. See issue envoyproxy/envoy#10124 in Envoy.
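As an aside, newer Envoy versions add a skip_if_absent flag to the request_headers action, so a missing header no longer suppresses the whole descriptor. A sketch, assuming your Istio/Envoy version already supports the field (verify before relying on it):

rate_limits:
- actions:
  - request_headers:
      header_name: "Authorization"
      descriptor_key: "auth"
      # assumption: skip_if_absent is available in your Envoy version
      skip_if_absent: true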
Sure, @ramaraochavali. Let me check with my ratelimit service team, try only with supported headers, and come back.
@ramaraochavali @devstein @gargnupur Thanks a lot for all your responses. It is working now when we pass all the headers in the request that are matched by request_headers (under rate_limits.actions). I have two questions here.
First, header_name should match what we send in the request to this ingress gateway Envoy, and descriptor_key becomes the key sent for all outbound requests, with the header's content as the descriptor value. Say, in the above case,
the request sent from the Istio gateway to our ratelimit service will become {"host":"abcd.xxx.com","PATH":"/api/v2/tickets"}?
Kindly suggest. Thanks once again!
@VinothChinnadurai: Yes for the first question.
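For the second question, a sketch of the matching Lyft ratelimit service config: two request_headers actions produce one two-key descriptor, so the keys nest (the host and path values are taken from the example above; the limit is illustrative):

domain: test
descriptors:
  - key: host
    value: "abcd.xxx.com"
    descriptors:
      - key: PATH
        value: "/api/v2/tickets"
        rate_limit:
          unit: minute
          requests_per_unit: 60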
Thanks a lot @gargnupur. I will try the same and come back.
@JaveriaK, @songford

- applyTo: CLUSTER
match:
cluster:
service: ratelimit.rate-limit.svc.cluster.local
patch:
operation: ADD
value:
name: rate_limit_service
type: STRICT_DNS
connect_timeout: 0.25s
lb_policy: ROUND_ROBIN
http2_protocol_options: {}
hosts:
- socket_address:
address: ratelimit.rate-limit.svc.cluster.local
          port_value: 8081

If we comment out this portion of the EnvoyFilter, there are no more warnings in the log. To clarify: when you deploy the rate-limit service in Kubernetes, Istio automatically recognizes it and adds it to Envoy's clusters. The raw config looks like this:

{
"version_info": "2020-10-19T11:25:53Z/83",
"cluster": {
"@type": "type.googleapis.com/envoy.api.v2.Cluster",
"name": "outbound|8080||ratelimit.rate-limit.svc.cluster.local",
"type": "EDS",
"eds_cluster_config": {
"eds_config": {
"ads": {}
},
"service_name": "outbound|8080||ratelimit.rate-limit.svc.cluster.local"
},
"connect_timeout": "10s",
"circuit_breakers": {
"thresholds": [
{
"max_connections": 4294967295,
"max_pending_requests": 4294967295,
"max_requests": 4294967295,
"max_retries": 4294967295
}
]
},
"http2_protocol_options": {
"max_concurrent_streams": 1073741824
},
"protocol_selection": "USE_DOWNSTREAM_PROTOCOL",
"filters": [
{
"name": "istio.metadata_exchange",
"typed_config": {
"@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
"type_url": "type.googleapis.com/envoy.tcp.metadataexchange.config.MetadataExchange",
"value": {
"protocol": "istio-peer-exchange"
}
}
}
],
"transport_socket_matches": [
{
"name": "tlsMode-istio",
"match": {
"tlsMode": "istio"
},
"transport_socket": {
"name": "envoy.transport_sockets.tls",
"typed_config": {
"@type": "type.googleapis.com/envoy.api.v2.auth.UpstreamTlsContext",
"common_tls_context": {
"alpn_protocols": [
"istio-peer-exchange",
"istio",
"h2"
],
"tls_certificate_sds_secret_configs": [
{
"name": "default",
"sds_config": {
"api_config_source": {
"api_type": "GRPC",
"grpc_services": [
{
"envoy_grpc": {
"cluster_name": "sds-grpc"
}
}
]
}
}
}
],
"combined_validation_context": {
"default_validation_context": {
"match_subject_alt_names": [
{
"exact": "spiffe://cluster.local/ns/rate-limit/sa/default"
}
]
},
"validation_context_sds_secret_config": {
"name": "ROOTCA",
"sds_config": {
"api_config_source": {
"api_type": "GRPC",
"grpc_services": [
{
"envoy_grpc": {
"cluster_name": "sds-grpc"
}
}
]
}
}
}
}
},
"sni": "outbound_.8080_._.ratelimit.rate-limit.svc.cluster.local"
}
}
},
{
"name": "tlsMode-disabled",
"match": {},
"transport_socket": {
"name": "envoy.transport_sockets.raw_buffer"
}
}
]
},
"last_updated": "2020-10-19T11:26:40.458Z"
}

The EnvoyFilter, when applied, adds this portion of cluster config:

{
"version_info": "2020-10-21T09:35:44Z/7",
"cluster": {
"@type": "type.googleapis.com/envoy.api.v2.Cluster",
"name": "rate_limit_service",
"type": "STRICT_DNS",
"connect_timeout": "0.250s",
"hosts": [
{
"socket_address": {
"address": "ratelimit.rate-limit.svc.cluster.local",
"port_value": 8081
}
}
],
"http2_protocol_options": {}
},
"last_updated": "2020-10-21T09:35:44.779Z"
}

So the two clusters point to the same destination, and Pilot somehow treated them as duplicates?
Hi guys, any idea if there is a way to use rate limiting for outbound HTTPS traffic? E.g. I would like to allow only 5 requests per second from pod XYZ to https://google.com.
@jdomag: You could use other Envoy descriptors, like remote_address, that are not dependent on HTTP headers.
Did you try on Istio 1.7.*?
@songford Did you try on Istio 1.7.4?
I have designed an API; I'd appreciate it if anyone can leave comments.
Hi, this process is needed for me because I want to use the cookie-to-metadata filter, which is available from Envoy v1.16. This is my sample configuration:

---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: filter-ratelimit
namespace: istio-system
spec:
configPatches:
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: envoy.http_connection_manager
subFilter:
name: envoy.router
patch:
operation: INSERT_BEFORE
value:
name: envoy.rate_limit
typed_config:
'@type': type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit
domain: ratelimit
failure_mode_deny: false
rate_limit_service:
grpc_service:
envoy_grpc:
cluster_name: rate_limit_cluster
timeout: 0.25s
- applyTo: CLUSTER
match:
cluster:
service: ratelimit.rate-limit.svc.cluster.local
patch:
operation: ADD
value:
connect_timeout: 0.25s
http2_protocol_options: {}
lb_policy: ROUND_ROBIN
load_assignment:
cluster_name: rate_limit_cluster
endpoints:
- lb_endpoints:
- endpoint:
address:
socket_address:
address: ratelimit.rate-limit.svc.cluster.local
port_value: 8081
        name: rate_limit_cluster
type: STRICT_DNS
workloadSelector:
labels:
    istio: ingressgateway
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: filter-ratelimit-svc
namespace: istio-system
spec:
configPatches:
- applyTo: VIRTUAL_HOST
match:
context: GATEWAY
routeConfiguration:
vhost:
          name: "*:80"
route:
action: ANY
patch:
operation: MERGE
value:
rate_limits:
- actions:
- dynamic_metadata:
descriptor_key: user
metadata_key:
key: envoy.lb
path:
- key: cookie
workloadSelector:
labels:
istio: ingressgateway
---
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: header-to-meta-filter
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: HTTP_FILTER
match:
context: GATEWAY
listener:
filterChain:
filter:
name: "envoy.http_connection_manager"
subFilter:
name: "envoy.router"
patch:
operation: INSERT_BEFORE
value:
name: envoy.header_metadata
typed_config:
"@type": type.googleapis.com/envoy.extensions.filters.http.header_to_metadata.v3.Config
request_rules:
- header: cookie
on_header_present:
metadata_namespace: envoy.lb
key: cookie
type: STRING
        remove: false

ConfigMap of the ratelimit service:

apiVersion: v1
kind: ConfigMap
metadata:
name: ratelimit-config
namespace: rate-limit
data:
config.yaml: |
domain: ratelimit
descriptors:
- key: user
rate_limit:
unit: minute
          requests_per_unit: 5
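If one particular cookie value needs a stricter limit, Lyft's ratelimit supports per-value entries alongside the catch-all key. A sketch (the cookie value is hypothetical):

domain: ratelimit
descriptors:
  - key: user
    rate_limit:
      unit: minute
      requests_per_unit: 5
  # a more specific entry wins for this exact descriptor value
  - key: user
    value: "sessionid=abc123"  # hypothetical cookie value
    rate_limit:
      unit: minute
      requests_per_unit: 1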
I've decided to use egress TLS origination and Envoy rate limiting for a particular service. I've described this in more detail in the article below, if anybody is interested:
Can you please provide the solution you used? We have a similar situation and are not able to find how to use unit and requests_per_unit.
@A-N-S: Please take a look at the working tests in the istio repo: https://github.com/istio/istio/blob/master/tests/integration/telemetry/policy/envoy_ratelimit_test.go. It sets up the rate limit service using Lyft's ratelimit too.
@gargnupur Thanks for the reference. Can you give some details about "{{ .RateLimitNamespace }}" and "{{ .EchoNamespace }}" used in https://github.com/istio/istio/blob/master/tests/integration/telemetry/policy/testdata/enable_envoy_ratelimit.yaml?
RateLimitNamespace -> the namespace where Lyft's Redis-backed rate limit service is set up.
We have tests in istio/istio for this, so closing the bug...
Hi, I followed the official documentation for rate limiting and could not get global rate limiting to work at the gateway level. I added all the details in #32381.
As far as I see, all examples for Envoy rate limiting add a new cluster and use STRICT_DNS. When using the rate limiting service with Istio, there is already a cluster created by Istio itself.
Is it possible to somehow use that cluster with Envoy rate limiting, and hope that gRPC request balancing works somehow?
@msonnleitner Is there any update on making it distribute traffic evenly?
@KoJJang Could you please share your config? We have been using ratelimit for 2 years.
@KoJJang Istio's rate limiting documentation was updated some time ago; now it contains a config which should work.
See the change here: https://github.com/istio/istio.io/pull/11654/files#diff-b20e3a9583a775ef679a0bc15a53c23aa9b6240757bd369d2ac81760072cd7d8R118. Since Istio's docs were updated to reference the cluster outbound|8081||ratelimit.default.svc.cluster.local, I guess it is safe to assume that this is supported and not just a "hack".
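Following that docs change, the rate_limit_service block can point at the cluster Istio already generates for the Kubernetes Service, instead of patching in a STRICT_DNS cluster. A rough sketch of the relevant fragment (the cluster name assumes a ratelimit Service on port 8081 in the default namespace, as in the linked docs):

rate_limit_service:
  grpc_service:
    envoy_grpc:
      # Istio's auto-generated cluster name for the ratelimit Service
      cluster_name: outbound|8081||ratelimit.default.svc.cluster.local
      authority: ratelimit.default.svc.cluster.local
  transport_api_version: V3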
@SCLogo
@msonnleitner
As per the updated Istio config, it should not be necessary to add that cluster manually. Try just deleting that section. IIRC, if you define a Kubernetes Service for ratelimiting, it should be "picked up" by Istio automatically.
@msonnleitner
After that, I checked that all requests are distributed to the ratelimit pods evenly, but ratelimiting doesn't work :-(
I think my setup didn't work correctly because I was using Istio 1.13 (in Istio 1.13 there was a guide to set up a STRICT_DNS cluster in the EnvoyFilter). I found a workaround while still using version 1.13, which is to expose the ratelimit service as a headless service (clusterIP: None).
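For reference, a headless Service sketch for that workaround: with clusterIP: None, the service DNS name resolves to the individual pod IPs, so Envoy's STRICT_DNS cluster load-balances across pods (the selector label is an assumption):

apiVersion: v1
kind: Service
metadata:
  name: ratelimit
  namespace: rate-limit
spec:
  clusterIP: None  # headless: DNS returns pod IPs instead of a single virtual IP
  ports:
  - name: grpc
    port: 8081
    targetPort: 8081
  selector:
    app: ratelimit  # assumption: your ratelimit deployment's pod label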
Because the mixer policy was deprecated in Istio 1.5, the official suggestion is to use Envoy rate limiting instead of mixer rate limiting. But we don't have any document to guide us on how to configure an EnvoyFilter to support ratelimit. The native Envoy ratelimit config looks like this:
But how do we configure an Istio EnvoyFilter to make it work?