max-retries-exceeded exceptions are confusing #1198
Requests wraps the exception for the user's convenience. The original exception is part of the message, although the traceback is misleading. I'll think about how to improve this.
I think we need an auto-retry option to ignore a few errors.
I agree this is quite confusing. Requests never retries (it sets retries=0 for urllib3's HTTPConnectionPool), so the error would be much more obvious without the HTTPConnectionPool/MaxRetryError stuff. I didn't realize requests used urllib3 until just now, when I had to dive into the source code of both libraries to figure out how many retries it was doing:
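For reference, here is a minimal sketch showing where that "zero retries" default can be observed in a modern version of requests (attribute names checked against current requests/urllib3, but treat the exact internals as version-dependent):

```python
import requests

# requests.Session routes URLs to transport adapters; the adapter holds
# the retry configuration that gets handed down to urllib3.
session = requests.Session()
adapter = session.get_adapter("http://example.com/")

# requests converts its default of max_retries=0 into a urllib3 Retry
# object with total=0, i.e. no automatic retrying at all.
print(adapter.max_retries.total)
```

So even though the traceback talks about "max retries exceeded", the retry budget was zero from the start.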
Ideally the exception would just look something like this:
That would be ideal. The issue is with wrapping these exceptions like we do. They make for a great API but a poor debugging experience. I have an idea for how to fix it, though, and preserve all the information.
We'd also need to consider the case where a user does configure retries, in which case this exception is appropriate.
Right, but there's no way to prevent a user from actually doing so. My plan, for the record, is to traverse as far down as possible to the lowest-level exception and use that instead. The problem with @benhoyt's example is that it seems the socket error exception is unavailable to us. (Just by looking at what he has pasted. I haven't tried to reproduce it yet and play with it.)
@gabor's example actually makes this easy to reproduce. Catching the exception that's raised, I did the following:

```
>>> e
ConnectionError(MaxRetryError("HTTPConnectionPool(host='localhost', port=1111): Max retries exceeded with url: / (Caused by <class 'socket.error'>: [Errno 111] Connection refused)",),)
>>> e.args
(MaxRetryError("HTTPConnectionPool(host='localhost', port=1111): Max retries exceeded with url: / (Caused by <class 'socket.error'>: [Errno 111] Connection refused)",),)
>>> e.args[0].args
("HTTPConnectionPool(host='localhost', port=1111): Max retries exceeded with url: / (Caused by <class 'socket.error'>: [Errno 111] Connection refused)",)
>>> e.args[0].args[0]
"HTTPConnectionPool(host='localhost', port=1111): Max retries exceeded with url: / (Caused by <class 'socket.error'>: [Errno 111] Connection refused)"
>>> isinstance(e.args[0].args[0], str)
True
```

So the best we could do is only use the message stored in that string.
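The "traverse to the lowest-level exception" plan mentioned above could be sketched like this (the nesting below uses stand-in exception types, since the real chain would be requests' ConnectionError wrapping urllib3's MaxRetryError wrapping a socket error):

```python
def innermost_exception(exc):
    """Follow wrapped exceptions down to the deepest one."""
    while exc.args and isinstance(exc.args[0], BaseException):
        exc = exc.args[0]
    return exc

# Hypothetical nesting mirroring the REPL session above:
root = OSError(111, "Connection refused")          # the socket error
wrapped = Exception(RuntimeError(root))             # wrapper-of-wrapper
print(innermost_exception(wrapped))
```

This only works when each layer stores the inner exception as an object rather than a pre-formatted string, which is exactly the problem discussed above.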
@sigmavirus24, I agree string parsing in exceptions is a terrible idea. However, urllib3's MaxRetryError already exposes the underlying exception as an attribute. So, continuing with the example above:
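In current urllib3 that attribute is `reason`, which carries the original cause as an object, so no string parsing is needed. A sketch (constructing the error by hand purely for illustration; `pool=None` is not how urllib3 builds it internally):

```python
from urllib3.exceptions import MaxRetryError

# Hypothetical underlying cause, as seen in the thread's examples.
cause = ConnectionRefusedError(111, "Connection refused")

# urllib3 normally fills in the pool and url; None/"/" are placeholders.
err = MaxRetryError(None, "/", reason=cause)

# The original exception object is recoverable directly.
print(err.reason)
```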
Nice catch @benhoyt. I'm not as familiar with urllib3 as I would like to be. |
If it really looks like what you showed, then I couldn't dream of a better exception, really.
@piotr-dobrogost, the main problem (for me) was the fact that it talks about "max retries exceeded" when there's no retrying involved at all. At first I thought it was the web service I was using saying that, so I contacted them. Then, digging further, I discovered this was a urllib3 quirk. So you can see the confusion.
Have you missed the "(Caused by ...)" part of the message?
Yeah, you're right -- it's all there. But as I mentioned, I missed that at first, because the MaxRetryError is a red herring.
This max retries thing always drives me mad. Does anybody mind if I dive in and see if I can't put a PR together to squash the retries message? I don't mean to appear out of nowhere, but I use requests tons in the Python work we do at Cloudant. We get pages that include the retries thing, and it can be a red herring.
The answer is maybe. The problem is that, while by default we don't perform any retries, you can configure Requests to automatically retry failed requests. In those situations, the MaxRetryError is appropriate.
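That opt-in looks roughly like the following sketch (mounting a new HTTPAdapter with a retry budget; the count of 3 is illustrative):

```python
import requests
from requests.adapters import HTTPAdapter

# Explicitly opt in to automatic retries. Once a user has done this,
# a "Max retries exceeded" message is genuinely accurate.
session = requests.Session()
session.mount("http://", HTTPAdapter(max_retries=3))
session.mount("https://", HTTPAdapter(max_retries=3))

print(session.get_adapter("http://example.com/").max_retries.total)
```

Any rewording of the exception therefore has to keep working for both the zero-retry default and this configured case.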
@Lukasa thanks, I'm refreshing myself on the backlog here. If I get a chance to dive in I will definitely reach out.
It almost seems to me as if the right place for the change is in urllib3? MaxRetryError makes sense to raise in the context of automatic retries, but in the case of zero retries (perhaps the naive requests experience) it can be confusing. In urllib3 it seems the confusing errors can be triggered here via requests. It'd almost be nice to only raise a MaxRetryError when retries were actually requested. I see that the urllib3 used by requests lives in an included package -- just curious, why is that? Anyways, these were just a few ideas I wanted to toss out there. I'm still catching up on the codebases.
Whether or not the fix belongs in urllib3 is totally down to @shazow. Given that urllib3 by default does retry (3 times IIRC), it may be that he wants to keep urllib3's behaviour as is. Pinging him to get his input. We vendor urllib3 to avoid some dependency issues. Essentially, it means we're always operating against a known version of urllib3. This has been discussed at excruciating length in #1384 and #1812 if you want the gritty details.
Phew, gritty but informative. @shazow these are just a few thoughts I had -- raising a RequestError rather than a MaxRetryError, as above. Really, I think I better understand the MaxRetryError after checking out urlopen. Double edit: really, even just a kwarg so one can ...
How about a retries=False option?
Being able to distinguish between asking urlopen for no retries and having it count down to 0 on the number of retries would be useful. It is jarring seeing the MaxRetryError when you did not ask for retries.
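This distinction did end up expressible in urllib3: retries=False disables retrying entirely and re-raises the underlying error, while retries=0 exhausts a zero-sized budget and raises MaxRetryError. A sketch against current urllib3, using a local port with no listener so it runs offline:

```python
import socket
import urllib3
from urllib3.exceptions import MaxRetryError

# Find a local port that nothing is listening on, so connects are refused.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

http = urllib3.PoolManager()
url = f"http://127.0.0.1:{port}/"

try:
    http.request("GET", url, retries=0)      # budget of zero, counted down
except Exception as e:
    err_zero = e                              # MaxRetryError

try:
    http.request("GET", url, retries=False)  # retrying disabled outright
except Exception as e:
    err_false = e                             # the underlying connection error

print(type(err_zero).__name__, type(err_false).__name__)
```

With retries=False the confusing "Max retries exceeded" wrapper never appears; the low-level connection error surfaces directly.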
If anyone would like to do a patch+test for this, it would be appreciated. :)
@shazow great, I'd be game to if I can find the cycles. I'll ping if I have anything.
\o/
I was wondering whether any patch was released? This issue seems to be a year old.
Not as far as I'm aware. =)
@kevinburke thoughts?
Need a little more time.
Yeah, this has been fixed I think:

```
>>> requests.get('http://localhost:11211')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "requests/api.py", line 60, in get
    return request('get', url, **kwargs)
  File "requests/api.py", line 49, in request
    return session.request(method=method, url=url, **kwargs)
  File "requests/sessions.py", line 457, in request
    resp = self.send(prep, **send_kwargs)
  File "requests/sessions.py", line 569, in send
    r = adapter.send(request, **kwargs)
  File "requests/adapters.py", line 407, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', error(61, 'Connection refused'))
```
Could you please tell me how this was resolved, since I too am getting a connection-refused problem on my end. In my Python script I am trying to connect to an RPC server.
@SiddheshS This issue was fixed by rewording some exceptions: it has nothing to do with the actual connection refused error. To ask for help with a problem you should consider using Stack Overflow.
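For readers landing here with the same symptom: the usual application-level fix is simply to catch requests' ConnectionError. A minimal sketch (the helper name and the unused-local-port setup are illustrative):

```python
import socket
import requests

def fetch(url):
    """Return the response, or None when the server is unreachable."""
    try:
        return requests.get(url, timeout=5)
    except requests.exceptions.ConnectionError:
        return None

# Demo against a local port with nothing listening on it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

result = fetch(f"http://127.0.0.1:{port}/")
print(result)
```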
I encountered the same problem; it happens occasionally. How can I fix it? Can anyone help me? Thanks.

requests.exceptions.ConnectionError: HTTPSConnectionPool(host='api.xxxx.com', port=443): Max retries exceeded with url: /v2/goods/?category=0&sort_type=2&page_size=3&page_num=13&t=0&count=110 (Caused by NewConnectionError('<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7f033a2c2590>: Failed to establish a new connection: [Errno 110] Connection timed out',))
@nkjulia The connection attempt is timing out, which suggests that the remote server is overloaded or that your connection timeout is too low.
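Setting an explicit timeout is done via the `timeout` parameter to requests, which accepts a (connect, read) pair. A sketch that runs against a throwaway local server so it needs no network (the timeout values here are illustrative, not recommendations):

```python
import http.server
import threading
import requests

# Tiny local server on a free port so the example is self-contained.
server = http.server.HTTPServer(
    ("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler
)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/"

# (connect timeout, read timeout): a too-small connect timeout is a
# common cause of the "Max retries exceeded ... timed out" message above.
resp = requests.get(url, timeout=(3.05, 27))
server.shutdown()
print(resp.status_code)
```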
I was also misled by this.
@kevinburke how did you resolve your issue after getting the connection refused error? Could you please advise? TIA.
Ignore my post, mate. I had multiple versions of Python on my machine, due to which it wasn't able to pick the right one and was throwing the error. Posting this in case it is helpful for someone.
If this issue has been solved, please give me some advice.
Hi,

For example (assuming nothing is listening on port 1111):

The exception says "Max retries exceeded". I found this confusing because I did not specify any retry-related params. In fact, I was unable to find any documentation about specifying the retry count. After going through the code, it seems that urllib3 is the underlying transport, and it is called with max_retries=0 (so in fact there are no retries); requests simply wraps the exception. So it is understandable, but it confuses the end-user (end-developer). I think something better should be done here, especially considering that it is very easy to hit this error.
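The original example code did not survive the page scrape, but based on the ConnectionError quoted earlier in the thread it was a GET against a localhost port with no listener. A reconstructed sketch with a current requests/urllib3 (picking an unused port dynamically rather than hard-coding 1111):

```python
import socket
import requests

# Stand-in for the report's "port 1111": a local port with no listener.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

wrapped = None
try:
    requests.get(f"http://127.0.0.1:{port}/")
except requests.exceptions.ConnectionError as e:
    # The first argument is the urllib3 exception whose message
    # is the one that mentions "Max retries exceeded".
    wrapped = e.args[0]

print(type(wrapped).__name__)
```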