fix: portforward connection cleanup #973
Conversation
Codecov Report
@@ Coverage Diff @@
## master #973 +/- ##
==========================================
- Coverage 72.58% 72.12% -0.47%
==========================================
Files 64 64
Lines 4505 4534 +29
==========================================
Hits 3270 3270
- Misses 1235 1264 +29
force-pushed from 90a6e78 to 5a32a22
I think we should stop
I don't think it's necessary to call
force-pushed from 04818a3 to 67d7821
force-pushed from 70c8b8c to 491bbf2
OK, I think I'm there now! Would you mind taking another look?
force-pushed from 491bbf2 to a3e0e1f
In some situations, connections were observed to linger in a CLOSE_WAIT or even an ESTABLISHED state. In others, an existing connection would be reused and would fail at the time of use. When the reader half is closed, signal it to the forwarder_loop task. When a websocket close message is received, signal it to the forwarder_loop task so it may shut down all write halves. Close the non-taken streams when joining so we may terminate. Signed-off-by: Tiago Castro <[email protected]>
force-pushed from a3e0e1f to a6208f1
Just a minor note (especially since this PR is fairly small), but please don't force-push to a PR that is already being reviewed. It makes it much harder to keep track of review progress.
288: feat(k8s): add proxy library for http and tcp r=tiagolobocastro a=tiagolobocastro
feat(k8s): add proxy library with portforward
Add a new k8s proxy library which is capable of setting up port forwarding to a port. A server is bound to a local port and each connection creates a new port forward to the configured pod target. Based on the kube-rs port bind example :+1:
Found a few bugs with the port-forward and raised a PR to attempt fixing them: kube-rs/kube#973
Signed-off-by: Tiago Castro <[email protected]>
Co-authored-by: Tiago Castro <[email protected]>
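The merge message above describes the proxy pattern: bind a server to a local port and open a fresh port forward to the target pod for each accepted connection. A minimal sketch of that pattern using kube's `Api::portforward` and `Portforwarder::take_stream`; the pod name `my-pod`, target port `80`, and local address `127.0.0.1:8080` are placeholder assumptions, not values from this PR:

```rust
use k8s_openapi::api::core::v1::Pod;
use kube::{Api, Client};
use tokio::io::copy_bidirectional;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let client = Client::try_default().await?;
    let pods: Api<Pod> = Api::default_namespaced(client);

    // Bind a local port; each accepted connection gets its own port-forward.
    let listener = TcpListener::bind("127.0.0.1:8080").await?;
    loop {
        let (mut local_conn, _) = listener.accept().await?;
        let pods = pods.clone();
        tokio::spawn(async move {
            // One new port-forward per connection, as the message describes.
            let mut pf = pods.portforward("my-pod", &[80]).await?;
            let mut upstream = pf.take_stream(80).expect("stream for port 80");
            // Proxy bytes in both directions until either side closes.
            copy_bidirectional(&mut local_conn, &mut upstream).await?;
            anyhow::Ok(())
        });
    }
}
```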
Hey, sorry. I'm honestly not quite comfortable reviewing this without a better way to test what's going on. I was working on that but did, indeed, end up forgetting about the PR. Sorry... :/
No problem at all! Was just checking :) Anyway, I can reproduce the half-closed connections:
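The actual repro snippet was not captured in this page. As a stand-in, here is a minimal sketch of one way to produce a half-closed connection against a locally forwarded port; the address `127.0.0.1:8080` and the HTTP request are assumptions, not the author's repro:

```rust
use tokio::io::AsyncWriteExt;
use tokio::net::TcpStream;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Connect to a locally forwarded port (address is hypothetical).
    let mut conn = TcpStream::connect("127.0.0.1:8080").await?;
    conn.write_all(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
        .await?;
    // Half-close: shut down only our write side. Before the fix, the
    // forwarder did not propagate this, so sockets could linger in
    // CLOSE_WAIT; inspect them with e.g. `ss -tan | grep 8080`.
    conn.shutdown().await?;
    // Keep the process alive long enough to observe the socket state.
    tokio::time::sleep(std::time::Duration::from_secs(60)).await;
    Ok(())
}
```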
OK, finally got back to this (sorry!), and can confirm both the issue and that this PR fixes it.
A few nits, but happy to merge after that!
I uploaded my repro to https://gitlab.com/teozkr/repros/-/tree/kube/portfwd-halfclose
Awesome, thank you!
There are dozens of us!
Use a ChannelState with initialized and shutdown flags rather than two Vecs. Drop the errors when joining the portforward task. Break on SocketClose to make it clearer. Signed-off-by: Tiago Castro <[email protected]>
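As a rough illustration of this commit message, a `ChannelState` that carries both flags per channel might look like the following; field names follow the message, but the merged code may differ:

```rust
/// Per-channel bookkeeping, replacing two parallel Vecs of flags.
/// (Illustrative only; names follow the commit message.)
#[derive(Default, Clone, Copy)]
struct ChannelState {
    /// Whether the channel's stream has been set up.
    initialized: bool,
    /// Whether the channel's write half has been shut down.
    shutdown: bool,
}

fn main() {
    // Two channels (data + error) per forwarded port.
    let ports = 1;
    let mut channels = vec![ChannelState::default(); ports * 2];
    channels[0].initialized = true;
    assert!(!channels[0].shutdown);
}
```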
I've addressed the comments, please take a look. Thank you.
Make use of the channel index to reference the shutdown port index, making it consistent with the initialized check. This is because even channel indexes are for data and odd indexes are for errors. Signed-off-by: Tiago Castro <[email protected]>
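To illustrate the even/odd convention this commit describes (two websocket channels per forwarded port: even for data, odd for errors), a small sketch of the index mapping; the function names are hypothetical, not kube's API:

```rust
/// Map a websocket channel index to the forwarded-port index.
/// Channels 2k and 2k+1 both belong to port k.
fn port_index(channel: usize) -> usize {
    channel / 2
}

/// Whether the channel carries error messages rather than data.
fn is_error_channel(channel: usize) -> bool {
    channel % 2 == 1
}

fn main() {
    // Channels 0 and 1 both refer to the first forwarded port.
    assert_eq!(port_index(0), 0);
    assert_eq!(port_index(1), 0);
    assert!(is_error_channel(1));
    // Channels 2 and 3 refer to the second.
    assert_eq!(port_index(3), 1);
}
```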
Use a single channel reference to avoid indexing into the vector every time. Signed-off-by: Tiago Castro <[email protected]>
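A small sketch of the refactor this commit describes: bind one mutable reference up front instead of indexing the vector at every use site. It reuses the illustrative `ChannelState` from above, so again the names are assumptions:

```rust
#[derive(Default, Clone, Copy)]
struct ChannelState {
    initialized: bool,
    shutdown: bool,
}

fn main() {
    let mut channels = vec![ChannelState::default(); 2];
    let idx = 0;

    // Take a single mutable reference once, rather than repeating
    // `channels[idx]` for each field access.
    let channel = &mut channels[idx];
    channel.initialized = true;
    if !channel.shutdown {
        // ... handle the frame for this channel ...
    }
}
```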
LGTM, waiting for @kazk before merging since they also seemed to have opinions
Looks good to me. Thanks!
Thanks!
Motivation
In some situations, connections were observed to linger in a CLOSE_WAIT or even an ESTABLISHED state.
In others, an existing connection would be reused and would fail at the time of use.
Solution
When the reader half is closed, signal it to the forwarder_loop task.
When a websocket close message is received, signal it to the forwarder_loop task
so it may shut down all write halves.
Close the non-taken streams when joining so we may terminate.
When a message is received indicating a closed socket, shut down all write halves.
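As an illustration of the shutdown signalling the Solution describes, here is a minimal sketch of a forwarder loop driven by explicit close signals sent over a channel. The `Signal` enum, its variants, and the function names are illustrative assumptions, not the merged code:

```rust
use tokio::sync::mpsc;

/// Signals sent to the forwarder loop (illustrative names).
enum Signal {
    /// A local reader half hit EOF: close the matching pod-side channel.
    ReaderClosed { channel: usize },
    /// The websocket sent a close message: shut down all write halves.
    SocketClose,
}

async fn forwarder_loop(mut rx: mpsc::Receiver<Signal>) {
    while let Some(sig) = rx.recv().await {
        match sig {
            Signal::ReaderClosed { channel } => {
                // Propagate the half-close for this channel so the peer
                // is not left on a connection stuck in CLOSE_WAIT
                // (e.g. drop or shut down its write half here).
                let _ = channel;
            }
            Signal::SocketClose => {
                // Shut down every write half and break out so the task,
                // and anything joining on it, can terminate.
                break;
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(16);
    let loop_task = tokio::spawn(forwarder_loop(rx));
    tx.send(Signal::ReaderClosed { channel: 0 }).await.unwrap();
    tx.send(Signal::SocketClose).await.unwrap();
    loop_task.await.unwrap();
}
```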