at processTimers (node:internal/timers:514:7) { code: 'ERR_INTERNAL_ASSERTION' } #50233

Closed
james-rms opened this issue Oct 18, 2023 · 10 comments
Labels: confirmed-bug (Issues with confirmed bugs), net (Issues and PRs related to the net subsystem)

Comments

@james-rms

james-rms commented Oct 18, 2023

Version

v20.6.1

Platform

Darwin 192-168-1-108.tpgi.com.au 22.4.0 Darwin Kernel Version 22.4.0: Mon Mar 6 20:59:28 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T6000 arm64

Subsystem

node:internal/timers

What steps will reproduce the bug?

I'm still working on simplifying this repro; currently it depends on the node-fetch library (my version is v2.6.4).

Running node repro.js and waiting about a minute yields the results in repro.txt on my machine. I've included the original TypeScript for clarity as well.
repro.zip
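
As a rough illustration only (this is not the attached repro.zip, and the host, timeout, and concurrency below are placeholders), the failing pattern is roughly "keep firing node-fetch requests that get aborted mid-connect against a host that resolves to more than one address", which is what drives net's family-autoselection timers:

```js
// repro-sketch.js -- illustrative sketch, not the attached repro.zip.
// Assumes node-fetch v2 and a hostname that resolves to multiple addresses
// (that is what exercises the network-family-autoselection timer path in node:net).
const fetch = require('node-fetch');

const TARGET = 'https://example.com/'; // placeholder host
const ABORT_AFTER_MS = 50;             // placeholder timeout

async function oneRequest() {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ABORT_AFTER_MS);
  try {
    const res = await fetch(TARGET, { signal: controller.signal });
    await res.text();
  } catch (err) {
    // node-fetch v2 reports aborts as { type: 'aborted', message: 'The user aborted a request.' }
    console.log(JSON.stringify({
      t: new Date().toISOString(),
      reason: { type: err.type, message: err.message },
    }));
  } finally {
    clearTimeout(timer);
  }
}

async function main() {
  // Keep a steady stream of aborted connection attempts going.
  for (;;) {
    await Promise.all(Array.from({ length: 20 }, oneRequest));
  }
}

main();
```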

How often does it reproduce? Is there a required condition?

Reliably with this repro script.

What is the expected behavior? Why is that the expected behavior?

I would expect this script to either hang or fail with an exception about running out of file descriptors or RAM.

What do you see instead?

{"t":"2023-10-18T03:00:22.029Z","reason":{"type":"aborted","message":"The user aborted a request."}}
{"t":"2023-10-18T03:00:31.887Z","reason":{"type":"aborted","message":"The user aborted a request."}}
{"t":"2023-10-18T03:00:37.817Z","reason":{"type":"aborted","message":"The user aborted a request."}}
node:internal/assert:14
    throw new ERR_INTERNAL_ASSERTION(message);
    ^

Error [ERR_INTERNAL_ASSERTION]: This is caused by either a bug in Node.js or incorrect usage of Node.js internals.
Please open an issue with this stack trace at https://github.com/nodejs/node/issues

    at new NodeError (node:internal/errors:405:5)
    at assert (node:internal/assert:14:11)
    at internalConnectMultiple (node:net:1118:3)
    at Timeout.internalConnectMultipleTimeout (node:net:1687:3)
    at listOnTimeout (node:internal/timers:575:11)
    at process.processTimers (node:internal/timers:514:7) {
  code: 'ERR_INTERNAL_ASSERTION'
}

Additional information

Since posting this I've tried eliminating the node-fetch dependency and using Node's built-in fetch, and I can't reproduce the issue with built-in fetch. I'm not sure what the major differences are that cause this.

@bnoordhuis
Member

Paging @ShogunPanda.

@bnoordhuis
Member

But @james-rms, can you try with the latest v20.x release? This may have been fixed already.

@james-rms
Author

I have reproduced it on v20.8.1, which is the latest release at the time of posting.

@james-rms
Author

I also fail to reproduce with NODE_OPTIONS=--no-network-family-autoselection, which was given as a workaround in the similar issue #47644.
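
For anyone who needs to stay on an affected release in the meantime, a minimal sketch of that workaround (the CLI flag is the one quoted above; net.setDefaultAutoSelectFamily() is, as far as I know, the programmatic equivalent added around Node 19.4, so check it against your version's docs):

```js
// Option 1: disable network-family autoselection for the whole process via the CLI:
//   NODE_OPTIONS=--no-network-family-autoselection node repro.js
//
// Option 2 (assumed equivalent API in recent Node releases): disable it from code,
// before any outbound sockets are created.
const net = require('node:net');
net.setDefaultAutoSelectFamily(false);
```

Either way this only disables the autoselection feature; it does not address the underlying assertion.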

@Farenheith
Contributor

I'm having this exact error too in production, on an application running on node:20-alpine, in an API that uses grpc-js. I'm really not sure how to reproduce it, so I'm downgrading the Docker image to node:18-alpine to see if it keeps happening. If it does, I'll downgrade again to node:16-alpine.

@ShogunPanda
Contributor

Can you please tell us which hosts you were connecting to and how they are resolved from the connecting machine?

Anyway, rather than downgrading, you can temporarily disable the feature with --no-network-family-autoselection (or similar; I'm typing from my phone so I can't check the exact spelling).

@Farenheith
Contributor

Farenheith commented Nov 14, 2023

The application provides a gRPC server and connects to another one; it also connects to about 4 REST servers, 1 Redis instance, and 1 MongoDB instance. All APIs are on AWS ECS and are reachable internally through separate load balancers with designated DNS names. I hope that answers your question.

I'll see if I can try the flag you suggested, but I'd rather try to reproduce it locally, if I can figure out how, should the application turn out to be stable on Node 18.

Edit:
Unfortunately, I think I'll have to downgrade for other reasons: besides this problem, this API is also seeing some really strange timeouts against a Redis instance that is used by other, more demanding APIs without any timeouts at all. I suspect the "@grpc/grpc-js" library is interfering with the network, as the other gRPC services here use the old legacy "grpc" library and I couldn't find any other difference between them. The catch is that the legacy library only works up to Node 16, so I need to rule out that variable first to keep investigating.

The error in this thread didn't happen on Node 18 for me, though.
Also, the difference between @grpc/grpc-js and grpc is that the newer package is a pure JavaScript implementation of the gRPC protocol, built on Node's internal http2 module, while grpc implements the protocol in C++ (I think) and binds it to Node.js.

@ShogunPanda
Contributor

I see. At least you gave me a little more context, which will help.
I hope we can get you updated soon!

@tniessen added the confirmed-bug and net labels on Dec 2, 2023
@ShogunPanda
Contributor

This should have been fixed by #51045. Once it lands in 21.x or 20.x, please let me know if you have additional problems.

@levialkalidamat

Hello, in my case the problem was that I was issuing too many requests on my free Cloudinary plan and the server was blocking certain requests. To get around this I used a VPN (Proton VPN), and I no longer have the problem.
