Add more aggressive rate limiting for publishing new crates #1681
Conversation
☔ The latest upstream changes (presumably #1682) made this pull request unmergeable. Please resolve the merge conflicts.
Sorry, I meant 10 minutes, not 10 seconds. (Our current rate limit is 1 req/min.)
Can you elaborate on what you're imagining for exponential back-off for rate limiting?
Aha, that's much better!
Conveniently, the git index updates are processed in a background queue now, so we could limit the number of pending new crate publish requests a particular user has, and we could add an artificial delay to each additional new crate publish request that comes from that user. So we let 30 publish requests go through fine, then the 31st gets a 1 second delay, the 32nd a 2 second delay, the 33rd a 4 second delay, and so on until it gets too slow for them to care to continue publishing crates, and the queue could page someone.
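Purely to illustrate the doubling schedule being proposed here (this approach did not end up landing; see the reply below), a minimal Rust sketch with hypothetical names:

```rust
use std::time::Duration;

/// Hypothetical delay for a user's Nth pending new-crate publish request:
/// the first `free` requests are undelayed, then each extra request doubles
/// the wait (31st -> 1s, 32nd -> 2s, 33rd -> 4s, ...).
fn publish_delay(request_number: u32, free: u32) -> Duration {
    if request_number <= free {
        Duration::from_secs(0)
    } else {
        // Cap the exponent so the shift cannot overflow.
        let doublings = (request_number - free - 1).min(30);
        Duration::from_secs(1) * (1u32 << doublings)
    }
}
```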
Unfortunately this would allow an attacker to cause the database and index to diverge, for potentially large amounts of time. I'd rather avoid allowing that. I think it makes sense to reject the initial request as quickly as possible.
Force-pushed from 4cbd958 to 1f1462d
This should be ready to go. I punted on cleaning up stale buckets for now, as it's not critical to land immediately.
Tests are failing.
Ugh. It's a ns vs us precision issue... |
Tests should be green on linux now. |
Force-pushed from ed92388 to 11782dd
Looks great overall. Mainly I think the default values should be adjusted.
☔ The latest upstream changes (presumably #1686) made this pull request unmergeable. Please resolve the merge conflicts.
I think the limit we'll probably set to start is 1 req/10s with a burst of 30. The error message will tell folks they can either wait for {time until next token} or email us to get the limit increased for them. This is limited per user instead of per IP, since rotating your user is harder than rotating your IP. It's stored in the DB since this is only for publishing new crates, which is slow enough already that the DB load of rate limiting there shouldn't matter.

I needed to update to Rust 1.33 to get `Duration::as_millis` (note: the way we're using this feature causes UB if the rate limit is slower than 1 request per 292471208 years. I assume this is not a problem). I needed to update to Diesel 1.4.2 to get a fix for diesel-rs/diesel#2017.

The algorithm used is pretty much the standard token bucket algorithm. It's *slightly* different in how we set `tokens = max(0, tokens - 1) + tokens_to_add` instead of `tokens = max(0, tokens_to_add + 1)`. This is because the usual implementation checks available tokens before subtracting them (and thus never persists if there aren't enough tokens available). Since we're doing this in a single query, and we can *only* return the final, persisted value, we have to change the calculation slightly to make sure that a user who is out of tokens gets `1` back after the rate limit.

A side effect of all of this is that our token count is actually offset by 1. 0 means the user is not only out of tokens, but that we just tried to take a token and couldn't. 1 means an empty bucket, and a full bucket would technically be burst + 1. The alternative would be -1 meaning the user is actually out of tokens, but since we only ever refill the bucket when we're trying to take a token, we never actually persist a full bucket. I figured a range of 0...burst made more sense than -1..burst.
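To make the refill-and-take calculation concrete, here is a minimal Rust sketch of the update described above. The type and function names are hypothetical, not the actual crates.io code, which performs this update in a single SQL statement via Diesel:

```rust
use std::time::{Duration, SystemTime};

/// Hypothetical model of the per-user bucket row.
struct Bucket {
    tokens: i64,
    last_refill: SystemTime,
}

struct RateLimit {
    rate: Duration, // time to accrue one token, e.g. 10 seconds
    burst: i64,     // e.g. 30
}

impl RateLimit {
    /// Refill based on elapsed time and take one token, returning the
    /// persisted count. Because of the offset described above, a return
    /// value of 0 means "just tried to take a token and couldn't".
    fn take_token(&self, bucket: &mut Bucket, now: SystemTime) -> i64 {
        let elapsed = now
            .duration_since(bucket.last_refill)
            .unwrap_or_default();
        let tokens_to_add = (elapsed.as_millis() / self.rate.as_millis()) as i64;

        // `max(0, tokens - 1) + tokens_to_add`, clamped so persisted values
        // stay in the 0..=burst range discussed above.
        let new_tokens = (bucket.tokens - 1).max(0) + tokens_to_add;
        if new_tokens >= self.burst {
            bucket.tokens = self.burst;
            bucket.last_refill = now;
        } else {
            bucket.tokens = new_tokens;
            // Only advance the refill time by the whole tokens actually credited.
            bucket.last_refill += self.rate * tokens_to_add as u32;
        }
        bucket.tokens
    }
}
```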
Force-pushed from 11782dd to 676efab
@bors r=jtgeibel
📌 Commit 676efab has been approved by `jtgeibel`
@bors cancel
@bors r=jtgeibel
📌 Commit f597a3f has been approved by `jtgeibel`
☀️ Test successful - checks-travis
This is still incomplete, but the bulk of the code has been written so I figured I'd get some eyes on it. Right now this just panics instead of returning an error if the user is out of tokens. Still left to do are:

- The two ignored test cases
- Implementing the actual error type
- Per-user burst rate overrides
- cron job to restrict the table size and clean up stale buckets (I probably won't land this in the initial PR, our users table needs to grow by 2 orders of magnitude for this to really matter -- but I do want to land it as a followup PR since I haven't tested this with cases where `now - last_update` is greater than a month. It should work fine but I'd rather not have this run against poorly defined semantics)

I think the limit we'll probably set to start is 1 req/10s with a burst of 30. The error message will tell folks they can either wait for {time until next token} or email us to get the limit increased for them. This is limited per user instead of per IP, since rotating your user is harder than rotating your IP. It's stored in the DB since this is only for publishing new crates, which is slow enough already that the DB load of rate limiting there shouldn't matter.

I needed to update to Rust 1.33 to get `Duration::as_millis` (note: the way we're using this feature causes UB if the rate limit is slower than 1 request per 292471208 years. I assume this is not a problem). I needed to update to Diesel 1.4.2 to get a fix for diesel-rs/diesel#2017.

The algorithm used is pretty much the standard token bucket algorithm. It's *slightly* different in how we set `tokens = max(0, tokens - 1) + tokens_to_add` instead of `tokens = max(0, tokens_to_add + 1)`. This is because the usual implementation checks available tokens before subtracting them (and thus never persists if there aren't enough tokens available). Since we're doing this in a single query, and we can *only* return the final, persisted value, we have to change the calculation slightly to make sure that a user who is out of tokens gets `1` back after the rate limit.

A side effect of all of this is that our token count is actually offset by 1. 0 means the user is not only out of tokens, but that we just tried to take a token and couldn't. 1 means an empty bucket, and a full bucket would technically be burst + 1. The alternative would be -1 meaning the user is actually out of tokens, but since we only ever refill the bucket when we're trying to take a token, we never actually persist a full bucket. I figured a range of 0...burst made more sense than -1..burst.
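To illustrate the offset-by-one bookkeeping, here is how the hypothetical `take_token` sketch from earlier in the thread would behave with the proposed 1 req/10s, burst-of-30 settings (again purely illustrative, not the actual crates.io code; it reuses the `RateLimit` and `Bucket` types from that sketch):

```rust
use std::time::{Duration, SystemTime};

fn main() {
    let limit = RateLimit { rate: Duration::from_secs(10), burst: 30 };
    let now = SystemTime::now();
    // A user who has just spent the first token of a full bucket.
    let mut bucket = Bucket { tokens: limit.burst, last_refill: now };

    // The rest of the burst goes through: persisted counts 29, 28, ..., 1.
    for _ in 0..29 {
        assert!(limit.take_token(&mut bucket, now) >= 1);
    }
    // Out of tokens: 0 means "just tried to take a token and couldn't".
    assert_eq!(limit.take_token(&mut bucket, now), 0);

    // One rate interval later the user gets exactly one token back, so the
    // persisted value is 1: the request is allowed and the bucket is empty again.
    let later = now + Duration::from_secs(10);
    assert_eq!(limit.take_token(&mut bucket, later), 1);
}
```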