
Cannot call write after a stream was destroyed #374

Closed · alanshaw opened this issue Jun 28, 2019 · 4 comments · Fixed by ipfs/js-ipfs#2245
Labels
- exp/wizard: Extensive knowledge (implications, ramifications) required
- kind/bug: A bug in existing code (including security flaws)
- P1: High: Likely tackled by core team if no one steps up
- status/ready: Ready to be worked

Comments

@alanshaw (Member)

Observed after running a daemon for a while:

$ ipfs daemon
Initializing IPFS daemon...
js-ipfs version: 0.36.4
System version: x64/darwin
Node.js version: 12.2.0
Swarm listening on /ip4/0.0.0.0/tcp/4001/ipfs/QmNfRMVp4GDBjeGgT2sVmzG19CX4oDhs6oNdWS8R4djcZ7
Swarm listening on /ip6/::/tcp/4001/ipfs/QmNfRMVp4GDBjeGgT2sVmzG19CX4oDhs6oNdWS8R4djcZ7
Swarm listening on /ip4/127.0.0.1/tcp/4003/ws/ipfs/QmNfRMVp4GDBjeGgT2sVmzG19CX4oDhs6oNdWS8R4djcZ7
Swarm listening on /p2p-circuit/ipfs/QmNfRMVp4GDBjeGgT2sVmzG19CX4oDhs6oNdWS8R4djcZ7
Swarm listening on /p2p-circuit/ip4/0.0.0.0/tcp/4001/ipfs/QmNfRMVp4GDBjeGgT2sVmzG19CX4oDhs6oNdWS8R4djcZ7
Swarm listening on /p2p-circuit/ip6/::/tcp/4001/ipfs/QmNfRMVp4GDBjeGgT2sVmzG19CX4oDhs6oNdWS8R4djcZ7
Swarm listening on /p2p-circuit/ip4/127.0.0.1/tcp/4003/ws/ipfs/QmNfRMVp4GDBjeGgT2sVmzG19CX4oDhs6oNdWS8R4djcZ7
API listening on /ip4/127.0.0.1/tcp/5001/http
Gateway (read only) listening on /ip4/127.0.0.1/tcp/8080/http
Web UI available at http://127.0.0.1:5001/webui
Daemon is ready
NodeError: Cannot call write after a stream was destroyed
    at doWrite (/Users/alan/.iim/dists/ipfs@0.36.4/node_modules/readable-stream/lib/_stream_writable.js:405:38)
    at writeOrBuffer (/Users/alan/.iim/dists/ipfs@0.36.4/node_modules/readable-stream/lib/_stream_writable.js:394:5)
    at ResponseStream.Writable.write (/Users/alan/.iim/dists/ipfs@0.36.4/node_modules/readable-stream/lib/_stream_writable.js:303:11)
    at BufferPeekStream.ondata (_stream_readable.js:696:22)
    at BufferPeekStream.emit (events.js:196:13)
    at BufferPeekStream.Readable.read (_stream_readable.js:491:10)
    at flow (_stream_readable.js:960:34)
    at emitReadable_ (_stream_readable.js:555:3)
    at onEofChunk (_stream_readable.js:519:5)
    at readableAddChunk (_stream_readable.js:239:5)
/Users/alan/.iim/dists/ipfs@0.36.4/node_modules/ipfs/src/cli/bin.js:9
  throw err
  ^

Error: Cannot call write after a stream was destroyed
    at doWrite (/Users/alan/.iim/dists/ipfs@0.36.4/node_modules/readable-stream/lib/_stream_writable.js:405:38)
    at writeOrBuffer (/Users/alan/.iim/dists/ipfs@0.36.4/node_modules/readable-stream/lib/_stream_writable.js:394:5)
    at ResponseStream.Writable.write (/Users/alan/.iim/dists/ipfs@0.36.4/node_modules/readable-stream/lib/_stream_writable.js:303:11)
    at BufferPeekStream.ondata (_stream_readable.js:696:22)
    at BufferPeekStream.emit (events.js:196:13)
    at BufferPeekStream.Readable.read (_stream_readable.js:491:10)
    at flow (_stream_readable.js:960:34)
    at emitReadable_ (_stream_readable.js:555:3)
    at onEofChunk (_stream_readable.js:519:5)
    at readableAddChunk (_stream_readable.js:239:5)
@jacobheun (Contributor)

Just adding some additional context before it leaves my brain:

This error was encountered heavily while there were numerous nodes spinning up and down on a local network (IPFS Camp). I'm assuming the thrashing of the network exposed a bug where we're not properly closing streams and bubbling that up when connections terminate, since the stack trace indicates we are writing to a dead connection/stream. A lot of nodes were leveraging MDNS while this was going on, but there were also plenty of browser and go-ipfs nodes running as well.

@jacobheun added the kind/bug, exp/wizard, P1 and status/ready labels on Jul 2, 2019
@lidel (Member) commented Jul 5, 2019

I also got this error today while browsing websites via the HTTP Gateway.

BufferPeekStream is present in the stack trace every time Cannot call write after a stream was destroyed occurs, and AFAIK the gateway is the only place where we use peek from buffer-peek-stream: js-ipfs:src/http/gateway/resources/gateway.js#L145-L162 (we use it for mime-type sniffing over the first few bytes).

Which raises the question: is there a bug in buffer-peek-stream itself, or is it just a canary in a coal mine, with the actual bug elsewhere?

PS: possible duplicate: libp2p/pull-mplex#13

@jacobheun (Contributor)

It's possible the pipe call isn't properly forwarding errors; using the stream.pipeline method might help mitigate this. If the Gateway is the only place we're using BufferPeekStream, we should add a test for clients hanging up while we're streaming them a response and see whether that triggers the failure.

@lidel (Member) commented Jul 5, 2019

I found a way to reproduce Error: Cannot call write after a stream was destroyed:

  1. Open a single file, e.g. http://127.0.0.1:9090/ipns/tr.wikipedia-on-ipfs.org/I/m/Mars_oppositions_2003-2018.png
  2. Open DevTools, go to the Network tab and disable the cache (with the cache enabled, HTTP 304 responses make it harder to reproduce)
  3. Rapidly refresh the page for a few seconds to ensure one of the requests is cancelled by the browser mid-flight (keeping F5 pressed also works)
  4. At some point the jsipfs daemon will die with Error: Cannot call write after a stream was destroyed

This happens even if I use pipeline. The error seems to originate inside of Hapi.js; perhaps Hapi is closing the response stream while we are still piping to it. I will look into this.

Update: the error is gone if I pass ipfs.catReadableStream directly to Hapi. Looks like a bug in buffer-peek-stream (it does .pipe internally) or in the way we use it.

Update: proposed fix in ipfs/js-ipfs#2227

lidel added a commit to ipfs/js-ipfs that referenced this issue Jul 8, 2019
This removes buffer-peek-stream, which caused uncaught errors
inside of Hapi.js, and replaces it with a much simpler approach
to setting the content-type header in Gateway responses.

Closes libp2p/js-libp2p#374
Closes libp2p/pull-mplex#13

License: MIT
Signed-off-by: Marcin Rataj <[email protected]>
lidel added a commit to ipfs/js-ipfs that referenced this issue Jul 12, 2019
This is an alternative to #2227
that does not hit datastore twice.

The underlying error was caused by multiple PassThrough/pipe calls.
Turns out go-ipfs does not compress anything by default, so we can
remove the PassThrough responsible for streaming the compressed payload
and fix the issue while keeping the optimization introduced by buffer-peek-stream.

Closes libp2p/js-libp2p#374
Closes libp2p/pull-mplex#13

License: MIT
Signed-off-by: Marcin Rataj <[email protected]>
alanshaw pushed a commit to ipfs/js-ipfs that referenced this issue Jul 12, 2019
3 participants