aiohttp 2.x closing client request data stream #1907
Comments
You can back up the stream to a private variable for the retries. Or keep a list of the streams for the retries and then remove the closed ones.
I can't back up the stream because aiohttp modifies the stream reference that gets passed in.
Hmm, other than subclassing ClientSession (or whichever class does this) to override the particular function that closes the stream, I have no other ideas.
Yeah, I would have to override close, but that's a hack. I still believe aiohttp shouldn't close streams that are passed in, like pre-2.x. I'd like to hear why that decision was made, or perhaps it was an oversight and this is a bug.
@thehesiod could you point to the code that closes the stream?
Sure, the stream is wrapped here:
and closed here:
botocore expects to be able to rewind the streams between requests, like it does with urllib.
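To make the rewind expectation concrete, a minimal sketch (the `send` function below is hypothetical, standing in for whatever HTTP call gets retried):

```python
import io


def send(stream) -> bool:
    """Hypothetical transport call standing in for an HTTP upload attempt."""
    stream.read()      # the client consumes the whole stream
    return False       # pretend the attempt failed so the caller retries


body = io.BytesIO(b"object data")

for attempt in range(3):
    body.seek(0)       # rewind the same stream before every attempt,
    if send(body):     # which only works if nobody has closed it
        break
```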
OK, I see. This is about files; I thought it was about StreamReader. It was an intentional decision: closing files seems like a good idea, and this code simplifies resource management. We can remove this code, or we can add an option to skip the close operation. @asvetlov what do you think?
Well, about streams in general. For whatever reason botocore wraps the bytes passed to it in a byte stream. Yeah, the decision is whether aiohttp should take full ownership of the stream or not. Given that aiohttp did not create the stream, I think not... but I'll let you guys decide, as long as there's an option to not close it :)
Also think about parity with requests/urllib.
I think it actually takes ownership. It also makes usage much simpler; otherwise everyone needs to manually manage resources, which is not very ergonomic. You can wrap your bytes stream in a custom payload implementation; that should be easy.
I just verified that requests does NOT close the stream, as I would have expected. If someone creates the stream, they are the explicit owner of said stream and should handle closing it, since they are the only ones who know when it should be closed. Only if they tell aiohttp to close it should it be closed. Otherwise aiohttp cannot know whether the caller wants to use the stream again after the request.
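A quick way to check that claim locally is a sketch like this (the httpbin.org endpoint is just an example echo service):

```python
import io

import requests

buf = io.BytesIO(b"some payload")
# a file-like object passed as `data` is streamed as the request body
requests.post("https://httpbin.org/post", data=buf)

# requests leaves the caller-owned stream open, so it can be rewound and reused
assert not buf.closed
buf.seek(0)
```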
Further, this seems to be an undocumented change in behavior: http://aiohttp.readthedocs.io/en/latest/migration.html
Could you explain how closing the stream is safer? There really is no difference between asyncio and sync code: in either case you can have multiple simultaneous readers of the same stream. There are two cases: the stream was fully read (you get an end-of-stream error on re-use), or the file was partially read or not read at all (you get an error thrown from the aiohttp method). So I'm not sure what this new behavior is preventing. Other than that, the object is ref-counted and garbage-collected, so it will get destroyed anyway. I'm also guessing that most streams used will be seekable (files and memory) and are designed to be reused.
I want to resurrect the issue.
On one hand @fafhrd91 is right: moving stream ownership to aiohttp is easier and safe in at least 95% of use cases. Otherwise people have to use a construction like this:
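For illustration only (the original snippet is not reproduced here), the kind of nesting meant is roughly:

```python
import aiohttp


async def upload(url: str) -> None:
    # if aiohttp never takes ownership, the caller has to manage the file,
    # the session, and the response, each with its own context manager
    with open("payload.bin", "rb") as f:
        async with aiohttp.ClientSession() as session:
            async with session.post(url, data=f) as resp:
                resp.raise_for_status()
```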
That's too many nested context managers to be used properly by the average software developer. What about adding an opt-in parameter for not transferring ownership to aiohttp?
Too many parameters. A custom payload class is 5 lines of code. -1
Yes, it's true. @thehesiod might pass a custom payload for his needs.
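A rough sketch of what such a custom payload might look like, built on aiohttp's public `aiohttp.payload` module (the class name and chunk size are made up, and base-class details have changed between aiohttp versions, so treat this as illustrative):

```python
import aiohttp
from aiohttp import payload


class NonClosingFilePayload(payload.IOBasePayload):
    """Streams a file-like object but leaves closing it to the caller."""

    async def write(self, writer) -> None:
        # copy the stream to the wire in chunks, but do NOT close
        # self._value afterwards, unlike the stock payload in aiohttp 2.x
        chunk = self._value.read(64 * 1024)
        while chunk:
            await writer.write(chunk)
            chunk = self._value.read(64 * 1024)


async def upload(session: aiohttp.ClientSession, url: str, fileobj) -> None:
    async with session.post(url, data=NonClosingFilePayload(fileobj)) as resp:
        resp.raise_for_status()
    fileobj.seek(0)  # the stream is still open, so a retry can rewind it
```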
@fafhrd91 please keep calm. I'm not very familiar with the payload API, sorry.
If you are not familiar, then why change my decisions? Helping everyone == crappy API.
I don't want to change your decision at all, don't get me wrong. Let's take a pause. P.S.
I have no urgent need for this feature right now, since I currently wrap our stream before sending it to aiohttp, so that when aiohttp calls close it does nothing. This just feels very hacky. If I were to wrap the payload I believe I would still have to do this, since aiohttp unconditionally calls close. If calling close were a good idea for HTTP libraries, you would have thought requests (which has been around a lot longer) would have done it as well. I still haven't been presented with any evidence of what problem this solves, and in fact I presented evidence of an issue it created.
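A wrapper of the kind described might look roughly like this (the class name is made up for illustration):

```python
class UnclosableStream:
    """Proxy that forwards everything to the wrapped stream except close()."""

    def __init__(self, stream):
        self._stream = stream

    def close(self) -> None:
        # aiohttp calls close() after the request; ignore it so the
        # original stream stays open and can be rewound for a retry
        pass

    def __getattr__(self, name):
        return getattr(self._stream, name)


# usage: session.post(url, data=UnclosableStream(body))
```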
If this is representative of how the
Closing the issue. It is a year old.
@jmehnle thank you for attracting attention to the outdated issue.
Better to close this and acknowledge that the last word has been spoken than get people's hopes up. :)
@jmehnle sorry, I don't follow your message.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
In previous versions of aiohttp, if you passed a stream as the `data` parameter of a request, it would not close the stream after the request. Now it closes it, because the stream is wrapped in a `Payload`, whose `write` method later closes it. I believe aiohttp should not be closing streams that are passed to it. This new behavior caused this issue with aiobotocore: aio-libs/aiobotocore#221, resulting in retries failing.
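A minimal sketch of the behavior being reported (the endpoint is a placeholder; under aiohttp 2.x the stream ends up closed after the request, so a retry that rewinds it fails):

```python
import asyncio
import io

import aiohttp


async def main() -> None:
    body = io.BytesIO(b"hello")
    async with aiohttp.ClientSession() as session:
        async with session.post("http://localhost:8080/upload", data=body) as resp:
            await resp.read()
    # aiohttp 2.x wraps `body` in a Payload whose write() closes it,
    # so this prints True and a later body.seek(0) raises ValueError
    print("closed after request:", body.closed)


asyncio.run(main())
```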