node v0.11.15+ should use pauseOnConnect option with net.createServer #10
Comments
Ironically,
Ah, I must have missed those issues on my initial search! It turns out from further testing that even with pauseOnConnect I'm seeing requests not making it through to the workers. Connections seem to make it to the workers and get emitted, but requests sometimes do not (based on logging in event listeners in the workers). Not sure if you've gotten any further with this?
It's hard to tell what the problem is without looking at code or a reproducible test case. I'd like to know more about this though:
"Connections" are actually sockets resulting from accepting an incoming connection. Sockets are file descriptors, which are the only thing you can pass. Requests however are higher level objects and I doubt you can pass them to workers at all. If you could elaborate, or post a simple test can I could reproduce, I can help you figuring out what's wrong. But otherwise it would be shooting in the dark. :) |
Thanks elad, to simplify the issue I took your code (with the new pauseOnConnect flag!) and added a simple hello world endpoint:
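(The original code blocks didn't survive in this copy of the thread, so as a stand-in, here is a hedged reconstruction of that kind of setup — the port, worker count, message name, and address hash are my assumptions, not zhonked's actual test code.)

```js
// sticky-master.js -- illustrative sketch only, not the original test code
var cluster = require('cluster');
var http = require('http');
var net = require('net');

var PORT = 3000;                                 // assumed port
var NUM_WORKERS = require('os').cpus().length;   // assumed worker count

if (cluster.isMaster) {
  var workers = [];
  for (var i = 0; i < NUM_WORKERS; i++) {
    workers.push(cluster.fork());
  }

  // pauseOnConnect (node v0.11.15+) keeps the master from reading any data
  // off the socket before it is handed to a worker.
  net.createServer({ pauseOnConnect: true }, function (socket) {
    // naive sticky routing: hash the remote address onto a worker
    var ip = socket.remoteAddress || '';
    var hash = 0;
    for (var j = 0; j < ip.length; j++) {
      hash = (hash * 31 + ip.charCodeAt(j)) | 0;
    }
    var worker = workers[Math.abs(hash) % workers.length];
    worker.send('sticky:connection', socket);
  }).listen(PORT);
} else {
  // simple hello world endpoint in the worker
  var server = http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('hello world\n');
  });

  process.on('message', function (msg, socket) {
    if (msg !== 'sticky:connection' || !socket) return;
    server.emit('connection', socket);
    socket.resume(); // defensive: the master accepted it with pauseOnConnect
  });
}
```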
I also restricted to just one worker to remove some variables:
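(In terms of the sketch above, that change is presumably as small as this; the variable name is mine.)

```js
// take the balancing hash out of the picture by forking exactly one worker
var NUM_WORKERS = 1;
```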
With these changes I ran siege to do some basic stress testing and saw a high number of failed transactions:
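(The actual siege invocation and its output aren't preserved in this copy of the thread; a typical run of that kind, with made-up concurrency and duration, would look something like the following.)

```sh
# hammer the master's port (3000 in the sketch above) and compare the
# availability / failed-transactions numbers siege prints at the end
siege -b -c 50 -t 30S http://127.0.0.1:3000/
```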
If you make a minor tweak to allow direct communication with the worker:
and run siege directly against a worker, you get much better results in terms of failed transactions:
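(The tweak itself isn't preserved either; one guess at its shape is to have each worker also listen on its own port so siege can bypass the master entirely. The port numbering is assumed.)

```js
// in the worker branch of the sketch above: also listen on a per-worker port
// so the worker can be hit directly, without going through the master
server.listen(PORT + 1 + cluster.worker.id);
```

```sh
# then point siege at a worker's direct port instead of the master
siege -b -c 50 -t 30S http://127.0.0.1:3001/
```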
I set up some logging in the worker's event listeners to see what was going on:
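(The logging code isn't preserved here; a sketch of counting the two event types inside the worker — names and structure are my guesses — could be:)

```js
// in the worker: count connections handed over vs. requests actually parsed
var connections = 0;
var requests = 0;

server.on('connection', function () {
  connections++;
});

server.on('request', function () {
  requests++;
});

// dump the counters periodically so the two can be compared after a siege run
setInterval(function () {
  console.log('[worker %d] connections=%d requests=%d',
              process.pid, connections, requests);
}, 5000);
```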
And I see from the final counts when testing against the master that while all connection events are fired, not all connection events result in request events. This makes me think the sockets/fds are closed before they can be read, or that something else is being clobbered. This is mere speculation on my part at this point:
In the end I think I'll most likely externalize the load balancing/sticky sessions as I may need to do some horizontal scaling in the future as well, but it'd be nice to figure out what's going on here. Thanks again for all your help!
That sounds like a bug in node.js. Have you tried running tcpdump to see where connections get dropped and how?
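(For example, assuming the test runs on loopback port 3000, something like the following would capture the traffic for later inspection.)

```sh
# capture loopback traffic on the test port; inspect the pcap afterwards to see
# which connections carry no HTTP request or get reset early
tcpdump -i lo -w sticky-test.pcap 'tcp port 3000'
```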
Reopening
@zhonked do you want to submit an issue and test code to node.js/io.js? I think this is an important issue; if you don't have the time I'll make some and take care of it. :)
@elad, sorry about being so late on this... I got tied up with a few things. I'll put together something in the next day or two, submit it, and link it here in case you have some extra insight to add. Thanks for the nudge :)
Time got away from me again... but I posted what I could to the node repo: nodejs/node-v0.x-archive#25594. Hopefully I'll actually find some time to dig into this a bit more in the near future.
Thanks for creating this helpful guide. I was running into some performance degradation after following the guide in node v0.12.2, but found out there was a pauseOnConnect option added to net.createServer, and that turned out to be the main issue. I didn't see this info anywhere after much googling, so hopefully this will be of use to someone else.
Also added here: indutny/sticky-session#25
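(In other words — a minimal before/after sketch of the change, not the guide's actual code:)

```js
var net = require('net');

// before: the master can start reading from the accepted socket before the
// handle reaches the worker, so early request data can be lost
var server = net.createServer(function (socket) { /* pass socket to a worker */ });

// after (node v0.11.15+): the socket stays paused and unread in the master
// until the worker takes it over and resumes it
var server = net.createServer({ pauseOnConnect: true }, function (socket) {
  /* pass socket to a worker */
});
```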