
Allow reusable concurrency limit via GlobalConcurrencyLimit #574

Merged: davidpdrsn merged 7 commits into tower-rs:master from the concurrency-new-semaphore branch on May 28, 2021

Conversation

jeromegn (Contributor)

We've been needing this so we can share the same Arc<Semaphore> across multiple listeners.

Concretely, this allows us to have a shared "max connections" limit across all our listeners. It also allows us to reuse the same semaphore for all services belonging to the same "app".

I've added a ToOwnedSemaphore trait and implemented it for usize and Arc<Semaphore>.

This is a breaking change for people explicitly creating a ConcurrencyLimitLayer. It now requires a type implementing ToOwnedSemaphore to be specified.
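
For illustration, a minimal sketch of what the trait described above could look like, based on the signature and the two impls mentioned in this PR (not the exact merged code):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// Produces an owned [`Semaphore`] (`Arc<Semaphore>`).
pub trait ToOwnedSemaphore {
    /// Returns an owned [`Semaphore`] for the type.
    fn to_owned_semaphore(&self) -> Arc<Semaphore>;
}

/// A plain limit creates a fresh semaphore with that many permits,
/// i.e. a per-layer limit as before.
impl ToOwnedSemaphore for usize {
    fn to_owned_semaphore(&self) -> Arc<Semaphore> {
        Arc::new(Semaphore::new(*self))
    }
}

/// An existing shared semaphore is cheaply cloned (`Arc` clone), so every
/// layer or listener handed the same `Arc` draws from one pool of permits.
impl ToOwnedSemaphore for Arc<Semaphore> {
    fn to_owned_semaphore(&self) -> Arc<Semaphore> {
        self.clone()
    }
}
```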

@davidpdrsn (Member) left a comment

I think this looks like a nice addition that makes sense to have. The ability to have a global concurrency limit used across services seems generally useful.

I'm thinking it might be possible to add this without a breaking change, however. Could we give the T on ConcurrencyLimitLayer<T> a default type that matches the current behavior, and keep new as fn new(max: usize) -> ConcurrencyLimitLayer?

We could then add fn new_with_semaphore(semaphore: T) -> ConcurrencyLimitLayer<T> for users who need the new feature.

If we did that we would have to keep ServiceBuilder::concurrency_limit as it is, but doing .layer(ConcurrencyLimitLayer::new_with_semaphore(semaphore)) isn't too bad.
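
A rough sketch of that non-breaking shape, assuming the layer simply stores whatever T it is given (hypothetical code, not from the PR):

```rust
/// `T` defaults to `usize`, so existing code that names
/// `ConcurrencyLimitLayer` without a type parameter keeps compiling
/// and keeps its per-service-limit meaning.
pub struct ConcurrencyLimitLayer<T = usize> {
    semaphore: T,
}

impl ConcurrencyLimitLayer {
    /// Unchanged constructor: a plain maximum number of in-flight requests.
    pub fn new(max: usize) -> ConcurrencyLimitLayer {
        ConcurrencyLimitLayer { semaphore: max }
    }
}

impl<T> ConcurrencyLimitLayer<T> {
    /// New constructor for callers passing a shared semaphore
    /// (or any other type implementing `ToOwnedSemaphore`).
    pub fn new_with_semaphore(semaphore: T) -> ConcurrencyLimitLayer<T> {
        ConcurrencyLimitLayer { semaphore }
    }
}
```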

@jeromegn (Contributor, Author) commented Apr 6, 2021

ping @davidpdrsn

@davidpdrsn (Member) left a comment

A few small docs suggestions but otherwise I think it looks good!

I'm gonna ask in Discord for someone else to have a look.

@hawkw (Member) left a comment

Concretely, this allows us to have a shared "max connections" limit throughout all our listeners. It also allows us to reuse the same semaphore for all services for the same "app".

I am definitely in favor of having something like this in tower --- we have our own implementation of a global concurrency limit in linkerd2-proxy, so it would be nice to be able to depend on this from upstream.

I'm somewhat unconvinced about the API changes in this PR. It doesn't seem immediately obvious to the user what the ToOwnedSemaphore trait is for or how it would be used to create per-service or global shared concurrency limit middleware. As an alternative, what do we think about just adding a totally separate GlobalConcurrencyLimit or SharedConcurrencyLimit middleware? This would probably require a lot more code in tower than the current approach, but I think it could be easier to understand for users, and would make the distinction between a per-instance concurrency limit and a shared concurrency limit more obvious. What do you think?

/// Produces an owned [`Semaphore`] (`Arc<Semaphore>`)
Member

I think if we're going to have this trait be part of the API, it may be worth discussing what it does, and how users would use it to create a global concurrency limit across all instances of a stack?

Comment on lines 25 to 27
/// Create a new concurrency limit layer from a type that implements [`ToOwnedSemaphore`]
pub fn new_with_semaphore(semaphore: T) -> Self {
ConcurrencyLimitLayer { semaphore }
Member

Not totally convinced it's necessary for this to be totally generic...we could just have a ConcurrencyLimit::new(usize) and a ConcurrencyLimit::new_global(semaphore: Arc<Semaphore>) or something like that? This would make it clearer to the user that one constructor is used for creating a separate concurrency limit for each service produced by the layer, and the other shares one limit across all such services produced by the layer.

I'm not sure if I can immediately come up with any use-cases for users implementing ToOwnedSemaphore for types other than usize or Arc<Semaphore>...but if there is a reason to, we can always add a more generic constructor later?

Comment on lines 44 to 45
/// Returns an owned [`Semaphore`] for the type
fn to_owned_semaphore(&self) -> Arc<Semaphore>;
Member

AFAICT, this trait will basically only ever be implemented for usize and Arc<Semaphore>, right? It doesn't seem like there's a lot that users could do with implementations for other types?

Hypothetically, I could maybe see a situation where the concurrency limiting policy depends on the service being wrapped in some way, as a use-case for custom implementations, but the current trait doesn't make that possible. Perhaps it could take a reference to the inner service when producing the semaphore...but I'm still not sure if something like that would actually be useful.

I would generally try to avoid exposing a trait in the public API if users aren't going to be implementing it themselves or using it as a trait bound, if that's possible. It feels like the user-facing complexity could be avoided if we just had two separate layers.

@jeromegn (Contributor, Author) commented Apr 8, 2021

Makes sense!

Perhaps what was in my first commit was closer to mergeable? 99a7ae3

@davidpdrsn (Member)

@jeromegn yeah seems a bit closer to what @hawkw is suggesting.

I would probably keep ConcurrencyLimit completely as is and make a separate new middleware called GlobalConcurrencyLimit.

@jeromegn jeromegn force-pushed the concurrency-new-semaphore branch from 1780b07 to e3892d4 on May 7, 2021 12:41
@jeromegn jeromegn force-pushed the concurrency-new-semaphore branch from e3892d4 to 7203c1d on May 7, 2021 12:44
@jeromegn (Contributor, Author) commented May 7, 2021

I've rewritten this based on my original commit. I think this strikes the right balance of complexity and code reuse. It shouldn't change the original API at all, and it allows creating a ConcurrencyLimit with an owned semaphore directly or through the new GlobalConcurrencyLimitLayer.
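
Roughly, what this enables (constructor names as they appear in this PR at review time; they may have been renamed before the final merge, so treat this as a sketch rather than the released tower API):

```rust
use std::{convert::Infallible, sync::Arc};
use tokio::sync::Semaphore;
use tower::{limit::GlobalConcurrencyLimitLayer, service_fn, ServiceBuilder};

fn main() {
    // One pool of 100 permits, shared by every service built below,
    // no matter which listener the service belongs to.
    let semaphore = Arc::new(Semaphore::new(100));

    let svc_a = ServiceBuilder::new()
        .layer(GlobalConcurrencyLimitLayer::new(semaphore.clone()))
        .service(service_fn(|req: String| async move { Ok::<_, Infallible>(req) }));

    let svc_b = ServiceBuilder::new()
        .layer(GlobalConcurrencyLimitLayer::new(semaphore))
        .service(service_fn(|req: String| async move { Ok::<_, Infallible>(req) }));

    // svc_a and svc_b now acquire permits from the same semaphore.
    let _ = (svc_a, svc_b);
}
```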

@jeromegn jeromegn requested a review from hawkw May 7, 2021 13:23
Comment on lines 31 to 32
/// This variant accepts a owned semaphore (`Arc<Semaphore>`) which can
/// be reused across services.
Member

The word "variant" makes me think of an enum, but I don't have any better suggestions 🤷

Member

how about just

Suggested change:
- /// This variant accepts a owned semaphore (`Arc<Semaphore>`) which can
- /// be reused across services.
+ /// Unlike [`ConcurrencyLimitLayer`], which enforces a per-service concurrency
+ /// limit, this layer accepts a owned semaphore (`Arc<Semaphore>`) which can be
+ /// shared across multiple services.

or something?

@jeromegn jeromegn force-pushed the concurrency-new-semaphore branch from 64a7ce2 to 0af78aa on May 7, 2021 14:10
@hawkw (Member) left a comment

thanks, this feels a lot more straightforward to use than the previous implementation with ToOwnedSemaphore!

i had some thoughts on whether or not we should expose tokio::sync::Semaphore in tower's public API. on one hand, it's probably better to hide the implementation detail, but on the other, if we didn't expose it in constructors, a global concurrency limit could only be shared between services using the Layer...i'm open to being convinced either way on this.

i also left a few docs suggestions.


Comment on lines 39 to 42
/// Create a new `GlobalConcurrencyLimitLayer`.
pub fn new(semaphore: Arc<Semaphore>) -> Self {
GlobalConcurrencyLimitLayer { semaphore }
}
Member

I'm not totally sure why this has to take an Arc<Semaphore>...couldn't it alternatively just have a constructor that takes the maximum number of concurrent requests, like ConcurrencyLimitLayer? Then, instead of cloning the semaphore and constructing a new ConcurrencyLimitLayer for each service, you could just clone the concurrency limit layer.

The advantage of this is that it doesn't put the tokio::sync type in the public API of GlobalConcurrencyLimitLayer. I feel like ideally, the semaphore should be just an implementation detail, not part of the public API. I can't immediately think of a reason we would ever want to change this code not to use tokio::sync::Semaphore, but it feels a little better not to expose it publicly, IMO...
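
A sketch of that alternative, assuming the layer keeps the Arc<Semaphore> as a private field and derives Clone (hypothetical, not the code under review):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// The semaphore stays an implementation detail; callers only ever see `max`.
#[derive(Clone, Debug)]
pub struct GlobalConcurrencyLimitLayer {
    semaphore: Arc<Semaphore>,
}

impl GlobalConcurrencyLimitLayer {
    pub fn new(max: usize) -> Self {
        GlobalConcurrencyLimitLayer {
            semaphore: Arc::new(Semaphore::new(max)),
        }
    }
}

// Cloning the layer clones the inner `Arc`, so every service built from any
// clone of this layer draws permits from the same shared pool.
```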

/// Create a new concurrency limiter with a provided shared semaphore
pub fn new_shared(inner: T, semaphore: Arc<Semaphore>) -> Self {
Member

nitpicky, TIOLI: if we're going to have this API, i might prefer naming it something like:

Suggested change:
- pub fn new_shared(inner: T, semaphore: Arc<Semaphore>) -> Self {
+ pub fn with_semaphore(inner: T, semaphore: Arc<Semaphore>) -> Self {

Member

on one hand, like my suggestion above, not having tokio::sync's semaphore in the public API might be theoretically nicer, hiding the implementation details. on the other hand, this API is the only way we can share a semaphore across ConcurrencyLimit services without using the Layer, so maybe having this constructor is worth having the semaphore exposed publicly.

if we decide to keep this public, and i think maybe we should, it's fine for the GlobalConcurrencyLimitLayer constructor to take a semaphore as it does currently...
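
For example, sharing a limit across services without going through the Layer at all (new_shared as it appears in this PR at review time; per the suggestion above it may have ended up named with_semaphore):

```rust
use std::{convert::Infallible, sync::Arc};
use tokio::sync::Semaphore;
use tower::{limit::ConcurrencyLimit, service_fn};

fn main() {
    let semaphore = Arc::new(Semaphore::new(64));

    // Two independently constructed services that nevertheless
    // share one pool of 64 permits.
    let svc_a = ConcurrencyLimit::new_shared(
        service_fn(|req: u32| async move { Ok::<_, Infallible>(req) }),
        semaphore.clone(),
    );
    let svc_b = ConcurrencyLimit::new_shared(
        service_fn(|req: u32| async move { Ok::<_, Infallible>(req) }),
        semaphore,
    );

    let _ = (svc_a, svc_b);
}
```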

@jeromegn (Contributor, Author)

I think we might have to keep exposing that Semaphore, even though it would be cleaner not to.

I don't actually use ConcurrencyLimitLayer::with_semaphore publicly myself, but since you're leaning towards keeping it public, it does make sense to allow building a GlobalConcurrencyLimit with a Semaphore.

I made adjustments!

@jeromegn jeromegn requested a review from hawkw May 26, 2021 12:45
@hawkw (Member) left a comment

looks good to me, i commented on a couple tiny docs nits

@davidpdrsn (Member)

@jeromegn thanks for working on this! Do you wanna update the changelog as well? Then we can merge it.

@jeromegn (Contributor, Author)

@davidpdrsn done!

@jeromegn jeromegn changed the title Allow more flexible ConcurrencyLimit input via ToOwnedSemaphore Allow more flexible ConcurrencyLimit via GlobalConcurrencyLimit May 28, 2021
@jeromegn jeromegn changed the title Allow more flexible ConcurrencyLimit via GlobalConcurrencyLimit Allow reusable concurrency limit via GlobalConcurrencyLimit May 28, 2021
@davidpdrsn davidpdrsn merged commit 74f9047 into tower-rs:master May 28, 2021
davidpdrsn added a commit that referenced this pull request May 28, 2021
0.4.8 (May 28, 2021)

- **builder**: Add `ServiceBuilder::map_result` analogous to
  `ServiceExt::map_result`.
- **limit**: Add `GlobalConcurrencyLimitLayer`, allowing a concurrency limit to be
  reused across multiple services ([#574])

[#574]: #574
hawkw added a commit to linkerd/linkerd2-proxy that referenced this pull request Jun 17, 2021
tower-rs/tower#574 added a `GlobalConcurrencyLimitLayer` which has the
same behavior as our `linkerd-concurrency-limit` crate (sharing a single
semaphore across all instances of a service). This was published in
Tower 0.4.8, and now that we've updated to that version, we can remove
our implementation in favour of upstream.

This has the nice advantage of "improving" code coverage by laundering
the responsibility for testing this code to upstream. And unlike us,
upstream [actually has tests][1]!

[1]: https://github.com/tower-rs/tower/blob/74f9047f309d16fa25978e34ea86d639383c02af/tower/tests/limit/concurrency.rs
olix0r pushed a commit to linkerd/linkerd2-proxy that referenced this pull request Jun 17, 2021
tower-rs/tower#574 added a `GlobalConcurrencyLimitLayer` which has the
same behavior as our `linkerd-concurrency-limit` crate (sharing a single
semaphore across all instances of a service). This was published in
Tower 0.4.8, and now that we've updated to that version, we can remove
our implementation in favor of upstream.