
Response's InputStream support #15

Open
pijusn opened this issue Jan 17, 2015 · 13 comments

Comments

@pijusn

pijusn commented Jan 17, 2015

Right now Wasabi supports streaming files or sending a generated response, which is fine for small services, but it doesn't scale well. What's worse, Wasabi does not provide an alternative.

I would like to see an option to execute multiple write operations on the same response without sending Content-Length or closing the socket.

How I see it working internally:
The Response object should have an OutputStream where all the data goes.

  • When you use send(File), it outputs all headers, the file content, and closes the stream.
  • When you use send(String), it outputs all headers, sends the string, and closes the stream.
  • When you use send(InputStream), it outputs all headers, sends data from the stream until it ends, and closes the internal stream.
  • When you use write(String), it outputs all headers (if not already sent) and sends the string.
  • When you call end() (or close() might be better; Closeable and AutoCloseable could be implemented), it sends all headers (if not sent) and closes the stream (if not closed).
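A rough sketch of the semantics proposed above, with hypothetical names (this is not Wasabi's actual API, and it's in plain Java rather than Wasabi's Kotlin): headers are flushed lazily on the first body write, and close() is idempotent.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch of the proposed Response semantics.
class StreamingResponse implements AutoCloseable {
    private final OutputStream out;
    private boolean headersSent = false;
    private boolean closed = false;

    StreamingResponse(OutputStream out) { this.out = out; }

    private void sendHeadersIfNeeded() throws IOException {
        if (!headersSent) {
            out.write("HTTP/1.1 200 OK\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
            headersSent = true;
        }
    }

    // write(String): sends headers (if not sent) and the string; stream stays open.
    void write(String s) throws IOException {
        sendHeadersIfNeeded();
        out.write(s.getBytes(StandardCharsets.UTF_8));
    }

    // send(InputStream): sends headers, copies the stream until it ends, then closes.
    void send(InputStream in) throws IOException {
        sendHeadersIfNeeded();
        in.transferTo(out);
        close();
    }

    // end()/close(): sends headers (if not sent) and closes the stream (if not closed).
    @Override
    public void close() throws IOException {
        if (!closed) {
            sendHeadersIfNeeded();
            out.close();
            closed = true;
        }
    }
}
```

With this shape, multiple write() calls append to the same open response, and implementing AutoCloseable lets callers use try-with-resources to guarantee end-of-response handling.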

This would also solve (in my opinion) a problem with how absolutePathToFileToStream is used for streaming files. It's subjective, but using streaming interfaces (FileInputStream, StringStream, ...) seems like a cleaner and more flexible solution.

I guess HTTP's chunked transfer encoding is a good fit for streams. It is independent of compression (the two can be combined or not) and adds very little overhead.
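For reference, the framing chunked encoding uses really is tiny: each chunk is the payload size in hex, CRLF, the bytes, CRLF, and a zero-length chunk terminates the body. A minimal sketch of the framing alone (illustrative only, not tied to Wasabi or Netty):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Chunked transfer framing: "<size-hex>\r\n<data>\r\n" per chunk,
// then "0\r\n\r\n" to end the body. Overhead is a few bytes per chunk.
class ChunkedFraming {
    static void writeChunk(OutputStream out, byte[] data) throws IOException {
        out.write(Integer.toHexString(data.length).getBytes(StandardCharsets.US_ASCII));
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
        out.write(data);
        out.write("\r\n".getBytes(StandardCharsets.US_ASCII));
    }

    static void writeLastChunk(OutputStream out) throws IOException {
        out.write("0\r\n\r\n".getBytes(StandardCharsets.US_ASCII));
    }
}
```

For example, the 4-byte chunk "Wiki" is framed as "4\r\nWiki\r\n".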

Comments on the proposal are welcome.

@hhariri
Collaborator

hhariri commented Jan 19, 2015

I agree that there's no "scalability" story here, and we do in fact need streaming, not only for large output but also for continuous output and for tying into reactive processing. If something like this is done, we also have to think about how and when we move toward chunked transfer, as opposed to merely holding off on sending the Content-Length.

Btw, would you be willing to work on this?

@swishy
Collaborator

swishy commented Jan 19, 2015

I don't mind as long as it doesn't break websockets ;). As Hadi said, though, I think chunked transfer is the way to go. Appreciate the feedback too, btw.

@pijusn
Author

pijusn commented Jan 20, 2015

If nobody picks this up by the end of the week, I will look deeper into implementing this feature.

Chunked transfer should be the next step in this direction. For now, stream output can be buffered (I believe Content-Length is only required for a response when the connection is being kept alive; see HTTP/1.1, section 4.4). Later it can be dealt with in different ways.
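The buffering fallback described here could look something like the following (a hypothetical sketch, not Wasabi code): collect all writes in memory and emit Content-Length only once the body is complete.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

// Fallback for keep-alive connections before chunked encoding exists:
// buffer the whole body, then send headers with the known Content-Length.
class BufferedResponse {
    private final ByteArrayOutputStream body = new ByteArrayOutputStream();

    void write(String s) {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        body.write(bytes, 0, bytes.length);
    }

    // end(): now that the body size is known, emit headers and body together.
    void end(OutputStream out) throws IOException {
        String headers = "HTTP/1.1 200 OK\r\nContent-Length: " + body.size() + "\r\n\r\n";
        out.write(headers.getBytes(StandardCharsets.US_ASCII));
        body.writeTo(out);
        out.flush();
    }
}
```

The obvious cost is that the full response sits in memory until end(), which is exactly why chunked transfer is the better long-term answer.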

P.S. I didn't investigate how web-sockets are implemented at the moment. Will keep in mind.

@pijusn
Author

pijusn commented Jan 29, 2015

So, I did some refactoring and tried not to break as much as I could.

  1. It looked like Wasabi didn't execute PreRequest interceptors for websockets but tried to execute PostRequest interceptors. I don't think the PostRequest interceptors ever actually ran, though, because the request object was initialized after websocket requests were handled. I might be missing something there. For the moment, I just commented it out.
  2. I moved some logic out of NettyRequestHandler and it seems to work with HTTP requests.
  3. I didn't implement streams because it seemed like serious refactoring is required before streams can be implemented.

Would you mind looking over the changes and maybe testing websockets? I don't have much experience with Netty and might be breaking stuff. @swishy seems concerned about the websockets, so maybe he has a real application to test on.

The changes are on my fork's master (last couple commits): https://github.com/PiusLT/wasabi

@swishy
Collaborator

swishy commented Jan 29, 2015

No, that's all good. I'm still working on the interceptor story for websockets; in some cases having shared interceptors makes sense, in others it doesn't. I do have something I can test the basic REST functionality with, though. I'll take a look when I get 5.

@hhariri
Collaborator

hhariri commented Jan 30, 2015

@PiusLT Thanks for the great work. Wasabi actually needs quite a bit of love; there are quite a few hacks in there and it's not production ready, so your work in this area is greatly appreciated. I've looked it over and it seems OK. Is it possible for you to send a PR? Do the tests run green for you?

@swishy
Collaborator

swishy commented Jan 30, 2015

@PiusLT I just checked this out tonight and it doesn't seem to be working: nothing ends up back in the response, and it appears the serialisers are not being called. Rolling back to the current main branch, all is well again. The PreRequest and PostRequest interceptors are used; they provide a hook for things like payload decryption/encryption. Also noted that a number of unit tests are not happy. Is everything committed in your branch? Aside from that, it seems to be heading in the right direction :)

@hhariri
Collaborator

hhariri commented Jan 30, 2015

Hmm. We'd need green first...

@swishy
Collaborator

swishy commented Jul 24, 2015

I moved the current file handling to a zero-memory-copy implementation. Going forward, I suggest we use the response's 'sendBuffer' to wrap a stream or similar; this would let us generically handle the majority of requests internally, and also give us the opportunity to bind proxied streaming data etc.

@pabl0rg

pabl0rg commented Dec 16, 2015

Hi, is this issue still interesting? Have you guys looked at Quasar for green threads (fibers)?

@swishy
Collaborator

swishy commented Dec 17, 2015

Hi @pabl0rg, yes, we still have a bit of work to do in this area. Netty already provides a decent non-blocking IO implementation, so Quasar isn't specifically needed to resolve the current situation; it's more about being able to implement multi-write-per-response support in Wasabi's pipeline, as per @PiusLT's original comment, without breaking the current interceptor execution etc. @hhariri's "green" comment was about the unit tests needing to pass before we could merge a PR :)

@pabl0rg

pabl0rg commented Dec 17, 2015

Quasar just puts a green-threads veneer over Netty's async. It took me a while to understand that. But it might simplify the implementation.

Check out this vid if you have time. It blew my mind.

https://m.youtube.com/watch?v=Nmob2MB2Qo8


@pabl0rg

pabl0rg commented Dec 17, 2015

PS thanks for the other clarifications :)

