Allow reusable concurrency limit via GlobalConcurrencyLimit #574
Conversation
I think this looks like a nice addition that makes sense to have. The ability to have a global concurrency limit used across services seems generally useful.
I'm thinking it might be possible to add without a breaking change, however. Could we make the `T` on `ConcurrencyLimitLayer<T>` have a default type that would match the current behavior and keep `new` as `fn new(max: usize) -> ConcurrencyLimitLayer`? We could then add `fn new_with_semaphore(semaphore: T) -> ConcurrencyLimitLayer<T>` for users who need the new feature.
If we did that we could keep `ServiceBuilder::concurrency_limit` as it is, and doing `.layer(ConcurrencyLimitLayer::new_with_semaphore(semaphore))` isn't too bad.
ping @davidpdrsn
A few small docs suggestions but otherwise I think it looks good!
I'm gonna ask in Discord for someone else to have a look.
Concretely, this allows us to have a shared "max connections" limit throughout all our listeners. It also allows us to reuse the same semaphore for all services for the same "app".
I am definitely in favor of having something like this in tower --- we have our own implementation of a global concurrency limit in linkerd2-proxy, so it would be nice to be able to depend on this from upstream.
I'm somewhat unconvinced about the API changes in this PR. It doesn't seem immediately obvious to the user what the `ToOwnedSemaphore` trait is for or how it would be used to create per-service or global shared concurrency limit middleware. As an alternative, what do we think about just adding a totally separate `GlobalConcurrencyLimit` or `SharedConcurrencyLimit` middleware? This would probably require a lot more code in tower than the current approach, but I think it could be easier to understand for users, and would make the distinction between a per-instance concurrency limit and a shared concurrency limit more obvious. What do you think?
tower/src/limit/concurrency/layer.rs
Outdated
```rust
/// Produces an owned [`Semaphore`] (`Arc<Semaphore>`)
```
I think if we're going to have this trait be part of the API, it may be worth discussing what it does, and how users would use it to create a global concurrency limit across all instances of a stack?
tower/src/limit/concurrency/layer.rs
Outdated
```rust
/// Create a new concurrency limit layer from a type that implements [`ToOwnedSemaphore`]
pub fn new_with_semaphore(semaphore: T) -> Self {
    ConcurrencyLimitLayer { semaphore }
```
Not totally convinced it's necessary for this to be totally generic...we could just have a `ConcurrencyLimit::new(usize)` and a `ConcurrencyLimit::new_global(semaphore: Arc<Semaphore>)` or something like that? This would make it clearer to the user that one constructor is used for creating a separate concurrency limit for each service produced by the layer, and the other shares one limit across all such services produced by the layer.
I'm not sure if I can immediately come up with any use-cases for users implementing `ToOwnedSemaphore` for types other than `usize` or `Arc<Semaphore>`...but if there is a reason to, we can always add a more generic constructor later?
tower/src/limit/concurrency/layer.rs
Outdated
```rust
/// Returns an owned [`Semaphore`] for the type
fn to_owned_semaphore(&self) -> Arc<Semaphore>;
```
AFAICT, this trait will basically only ever be implemented for `usize` and `Arc<Semaphore>`, right? It doesn't seem like there's a lot that users could do with implementations for other types?
Hypothetically, I could maybe see a situation where the concurrency limiting policy depends on the service being wrapped in some way, as a use-case for custom implementations, but the current trait doesn't make that possible. Perhaps it could take a reference to the inner service when producing the semaphore...but I'm still not sure if something like that would actually be useful.
I would generally try to avoid exposing a trait in the public API if users aren't going to be implementing it themselves or using it as a trait bound, if that's possible. It feels like the user-facing complexity could be avoided if we just had two separate layers.
Makes sense! Perhaps what was in my first commit was closer to mergeable? 99a7ae3
I've rewritten this based on my original commit. I think this strikes the right balance of complexity and code reuse. It shouldn't change the original API at all and allows creating a […]
tower/src/limit/concurrency/layer.rs
Outdated
```rust
/// This variant accepts a owned semaphore (`Arc<Semaphore>`) which can
/// be reused across services.
```
The word "variant" makes me think of an enum but I don't have any better suggestions 🤷
how about just:
```rust
/// Unlike [`ConcurrencyLimitLayer`], which enforces a per-service concurrency
/// limit, this layer accepts a owned semaphore (`Arc<Semaphore>`) which can be
/// shared across multiple services.
```
or something?
Co-authored-by: David Pedersen <[email protected]>
thanks, this feels a lot more straightforward to use than the previous implementation with `ToOwnedSemaphore`!
i had some thoughts on whether or not we should expose `tokio::sync::Semaphore` in tower's public API. on one hand, it's probably better to hide the implementation detail, but on the other, if we didn't expose it in constructors, a global concurrency limit could only be shared between services using the `Layer`...i'm open to being convinced either way on this.
i also left a few docs suggestions.
tower/src/limit/concurrency/layer.rs
Outdated
```rust
/// Create a new `GlobalConcurrencyLimitLayer`.
pub fn new(semaphore: Arc<Semaphore>) -> Self {
    GlobalConcurrencyLimitLayer { semaphore }
}
```
I'm not totally sure why this has to take an `Arc<Semaphore>`...couldn't it alternatively just have a constructor that takes the maximum number of concurrent requests, like `ConcurrencyLimitLayer`? Then, instead of cloning the semaphore and constructing a new `ConcurrencyLimitLayer` for each service, you could just clone the concurrency limit layer.
The advantage of this is that it doesn't put the `tokio::sync` type in the public API of `GlobalConcurrencyLimitLayer`. I feel like ideally, the semaphore should be just an implementation detail, not part of the public API. I can't immediately think of a reason we would ever want to change this code not to use `tokio::sync::Semaphore`, but it feels a little better not to expose it publicly, IMO...
```rust
/// Create a new concurrency limiter with a provided shared semaphore
pub fn new_shared(inner: T, semaphore: Arc<Semaphore>) -> Self {
```
nitpicky, TIOLI: if we're going to have this API, i might prefer naming it something like:
```rust
pub fn with_semaphore(inner: T, semaphore: Arc<Semaphore>) -> Self {
```
on one hand, like my suggestion above, not having `tokio::sync`'s semaphore in the public API might be theoretically nicer, hiding the implementation details. on the other hand, this API is the only way we can share a semaphore across `ConcurrencyLimit` services without using the `Layer`, so maybe having this constructor is worth having the semaphore exposed publicly.
if we decide to keep this public, and i think maybe we should, it's fine for the `GlobalConcurrencyLimitLayer` constructor to take a semaphore as it does currently...
I think we might have to keep exposing that […] I don't actually use the […] I made adjustments!
looks good to me, i commented on a couple tiny docs nits
@davidpdrsn done!
tower-rs/tower#574 added a `GlobalConcurrencyLimitLayer` which has the same behavior as our `linkerd-concurrency-limit` crate (sharing a single semaphore across all instances of a service). This was published in Tower 0.4.8, and now that we've updated to that version, we can remove our implementation in favour of upstream. This has the nice advantage of "improving" code coverage by laundering the responsibility for testing this code to upstream. And unlike us, upstream [actually has tests][1]! [1]: https://github.com/tower-rs/tower/blob/74f9047f309d16fa25978e34ea86d639383c02af/tower/tests/limit/concurrency.rs
We've been needing this to share the same `Arc<Semaphore>` from multiple listeners. Concretely, this allows us to have a shared "max connections" limit throughout all our listeners. It also allows us to reuse the same semaphore for all services for the same "app".
I've added a `ToOwnedSemaphore` trait and implemented it for `usize` and `Arc<Semaphore>`.
This is a breaking change for people explicitly creating a `ConcurrencyLimitLayer`. It now requires a type implementing `ToOwnedSemaphore` to be specified.