Response's InputStream support #15
Comments
I agree that there's no "scalability" story here and we do in fact need streaming, not only for large output but also for continuous output, and for tying into reactive processing. I think that if something like this is done, we also have to think about how/when we move towards chunked transfer as opposed to merely holding off on sending the content length. Btw, would you be willing to work on this?
I don't mind as long as it doesn't break websockets ;) As Hadi said, though, I think chunked transfer is the way to go as well. Appreciate the feedback too, btw.
If nobody picks this up by the end of the week, I will look deeper into implementing this feature. Chunked transfer should be the next step in this direction. For now, stream output can be buffered (I believe). P.S. I didn't investigate how web sockets are implemented at the moment. Will keep it in mind.
So, I did some refactoring and tried to break as little as possible.
Would you mind looking at the changes and maybe testing web sockets? I don't have much experience with Netty and might be breaking stuff. @swishy seems concerned about the websockets, so maybe he has a real application to test it on. The changes are on my fork's master (last couple of commits): https://github.com/PiusLT/wasabi
No, that's all good. I'm still working on the interceptor story for websockets; in some cases having shared interceptors makes sense, in others it doesn't. I do have something I can test the basic functionality with over REST, though. I'll take a look when I get 5.
@PiusLT Thanks for the great work. Wasabi actually needs quite a bit of love — there are quite a few hacks in there and it's not production ready — so your work in this area is greatly appreciated. I've looked over it and it seems OK. Is it possible for you to send a PR? Do the tests run green for you?
@PiusLT I just checked this out tonight and it doesn't seem to be working: nothing ends up back in the response, and it appears the serialisers are not being called. Rolling back to the current main branch and all is well again. The PreRequest and PostRequest interceptors are used; they provide a hook for things like payload decryption/encryption. Also noted a number of unit tests are not happy. Is everything committed in your branch? Aside from that, it seems to be heading in the right direction :)
Hmm. We'd need green first... |
I moved the current file handling to a zero-memory-copy implementation. Going forward, I suggest we use the response's 'sendBuffer' to wrap a stream or similar; this would allow us to generically handle the majority of requests internally, and also give us the opportunity to bind proxied streaming data etc.
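The zero-memory-copy file handling mentioned above can be sketched in plain Java NIO with `FileChannel.transferTo`, which lets the kernel move bytes directly when the target channel supports it (e.g. a socket channel). This is a minimal illustration, independent of Wasabi's or Netty's actual code; the class name `ZeroCopyFileSender` is hypothetical.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical helper illustrating zero-copy style file transfer.
public final class ZeroCopyFileSender {
    // Streams the whole file into `out` via FileChannel.transferTo,
    // which avoids copying through user-space buffers when the
    // underlying target channel supports direct transfer.
    public static long send(Path file, OutputStream out) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            WritableByteChannel target = Channels.newChannel(out);
            long size = ch.size();
            long sent = 0;
            // transferTo may move fewer bytes than requested, so loop.
            while (sent < size) {
                sent += ch.transferTo(sent, size - sent, target);
            }
            return sent;
        }
    }
}
```

Netty exposes the same idea through `DefaultFileRegion`; the sketch above just shows the mechanism in isolation.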
Hi, is this issue still of interest? Have you guys looked at Quasar for green threads (fibers)?
Hi @pabl0rg, yes, we still have a bit of work to do in this area. Netty already provides a decent non-blocking IO implementation, so Quasar isn't specifically needed to resolve the current situation; it's more about implementing multi-write-per-response support in Wasabi's pipeline, as per @PiusLT's original comment, without breaking the current interceptor execution etc. @hhariri's "green" comment meant that the unit tests would need to be passing before we could merge a PR :)
Quasar just puts a green-threads veneer over Netty's async. Check out this vid if you have time. It blew my mind. https://m.youtube.com/watch?v=Nmob2MB2Qo8
PS thanks for the other clarifications :)
Right now Wasabi supports streaming files or sending a generated response, which is fine for small services, but it doesn't scale well. What is worse, Wasabi does not provide an alternative.
I would like to see an option to execute multiple write operations on the same response without sending `Content-Length` or closing the socket.
How I see it working internally:
- The Response object should have an OutputStream where all the data goes.
- `send(File)`: outputs all headers, the file content, and closes the stream.
- `send(String)`: outputs all headers, sends the string, and closes the stream.
- `send(InputStream)`: outputs all headers, sends data from the stream until it ends, and closes the internal stream.
- `write(String)`: outputs all headers (if not sent) and sends the string.
- `end()` (or `close()` might be better; `Closeable` and `AutoCloseable` could be implemented): sends all headers (if not sent) and closes the stream (if not closed).
This would also solve (in my opinion) a problem with how `absolutePathToFileToStream` is used for streaming files. It's subjective, but using streaming interfaces (`FileInputStream`, `StringStream`, ...) seems like a cleaner and more flexible solution.
I guess HTTP's chunked transfer encoding is a great fit for the streams. It has nothing to do with compression (they can be combined or not) and adds very little overhead.
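The proposed `write`/`end` semantics map naturally onto chunked transfer encoding: each `write` becomes one chunk (hex length, CRLF, payload, CRLF) and `end`/`close` emits the terminating zero-length chunk. Below is a minimal sketch of that mapping over a plain `OutputStream` — not Wasabi's actual API; the class `ChunkedResponse` and its status-line handling are illustrative assumptions only.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Illustrative sketch of the proposed multi-write response,
// framed with HTTP/1.1 chunked transfer encoding.
public final class ChunkedResponse implements AutoCloseable {
    private final OutputStream out;
    private boolean headersSent = false;
    private boolean closed = false;

    public ChunkedResponse(OutputStream out) { this.out = out; }

    // Sends headers once; Transfer-Encoding: chunked replaces Content-Length.
    private void sendHeadersIfNeeded() throws IOException {
        if (headersSent) return;
        out.write(("HTTP/1.1 200 OK\r\n"
                 + "Transfer-Encoding: chunked\r\n\r\n")
                 .getBytes(StandardCharsets.US_ASCII));
        headersSent = true;
    }

    // write(String): headers (if not sent), then one chunk.
    public void write(String s) throws IOException {
        sendHeadersIfNeeded();
        byte[] body = s.getBytes(StandardCharsets.UTF_8);
        // Chunk frame: size in hex, CRLF, payload, CRLF.
        out.write((Integer.toHexString(body.length) + "\r\n")
                 .getBytes(StandardCharsets.US_ASCII));
        out.write(body);
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
    }

    // end()/close(): headers (if not sent) and the terminating zero chunk.
    @Override
    public void close() throws IOException {
        if (closed) return;
        sendHeadersIfNeeded();
        out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
        out.flush();
        closed = true;
    }
}
```

Because the class implements `AutoCloseable`, a handler could use it in a try-with-resources block and the terminating chunk would be sent even if the handler throws.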
Comments on the proposal are welcome.