
Adding logic to extract the Rate Limiting statistics #291

Merged (7 commits) on Dec 31, 2019

Conversation

@CharlyF (Contributor) commented Dec 26, 2019

In an effort to give maximum visibility into the communication between the client and the backend, we would like to introduce logic to extract the Rate Limiting stats from the headers of calls made to rate-limited endpoints.
More details are in the official API doc.

It exposes a thread safe public method to get the values.
Additionally, we update the map associated with the Client in the method closest to the request to ensure that we don't miss calls.


func makeHeader(limit, period, reset, remaining string) *http.Response {


What are your thoughts on using that same RateLimit struct as a param to this func? These four params are all strings, so it is easy to get confused about which one is which.

ratelimit.go Outdated
func (client *Client) GetRateLimitStats() map[string]RateLimit {
	client.m.Lock()
	defer client.m.Unlock()
	return client.rateLimitingStats
}
Contributor


I think you should make a copy of the map before returning it. Otherwise the values can be changed/updated concurrently by the caller.

@gzussa (Contributor) left a comment

Thanks @CharlyF Looks good.

Do we need to make this a cached/async thingy?
The only benefit I see with this implementation is that you can get latest values independently from your api calls. However, are there any use cases for it?
Also, I fear we will forget to add new endpoints to the list of supported endpoints.

Have you evaluated the uncached/sync approach?

@CharlyF (Contributor, Author) commented Dec 30, 2019

Thank you for the review @gzussa.

Do we need to make this a cached/async thingy?

I believe clients should be as lightweight and stateless as possible, but we need to keep the values in memory so that one can use them anytime. In my use case, I submit them as a metric to be alerted when I am about to reach the limit:

[screenshot of the rate limit values graphed as a metric, Dec 30, 2019]

I don't think we need a full cache though.

As for the async aspect: it is critical to be able to know when this problem arises. My approach was to 1. get the most up-to-date values upon demand, and 2. not contribute to the rate limit when requesting the statistics (i.e. there is no dedicated rate-limiting endpoint, so you would otherwise have to query a rate-limited endpoint to get them). Exactly as you said:

you can get latest values independently from your api calls.

With regards to the following,

Also, I fear we will forget to add new endpoints to the list of supported endpoints.

I added logic to parse the headers of every call and look for the X-RATELIMIT- prefix. You were right, this makes us more future-proof.

@gzussa (Contributor) left a comment

Thanks for this latest change 💯

@CharlyF force-pushed the adding-rate-limiting-logic branch from 3aa74ab to 678f9cf on December 31, 2019 16:27
@gzussa merged commit 0784a6c into zorkian:master on Dec 31, 2019
@CharlyF deleted the adding-rate-limiting-logic branch on December 31, 2019 19:15