TLS not properly flushed, causing "large" responses to arrive incompletely #10049

Closed
Pwuts opened this issue Apr 7, 2021 · 7 comments · Fixed by #10146
Labels: bug (Something isn't working correctly), high priority, runtime (Relates to code in the runtime crate), tls (Issues related to TLS implementation)

Pwuts commented Apr 7, 2021

I've run into an issue with serveTLS where serving "large" files (Nuxt javascript files in this case) fails. The request completes on the server side, or so it seems, but the socket does not close and thus the client (browser or cURL) does not stop loading.

POC:

import { serveTLS } from "https://deno.land/std/http/server.ts";

const options = {
  hostname: "localhost",
  port: 8443,
  certFile: "/tmp/foo.crt",
  keyFile: "/tmp/foo.key",
};

for await (const req of serveTLS(options)) {
  req.respond({ body: new TextEncoder().encode("A".repeat(2 ** 17)) });  // 128 KiB
}

My browser (Chrome) stops after receiving exactly 64 KiB, or finishes if the file is smaller than that. Trying to get the same files with cURL only gives me 32 KiB.

Possible cause

@Juerd, @supakeen and I figured out it might have something to do with the fact that in writeResponse on line 289, writer.flush() is called, which writes the contents of its buffer to the underlying writer but does not flush that underlying layer in case it has a buffer of its own, which is the case with TLS.

Is it possible someone missed this particular comment in Tokio TLS, the library Deno uses for its TLS implementation? (Kudos to @Juerd for finding that.)
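The layering problem described above can be illustrated with a minimal, self-contained sketch. The classes below are hypothetical stand-ins, not Deno's actual BufWriter or TLS internals: flushing the outer buffered writer only pushes bytes into the inner (TLS-like) layer's buffer, so nothing reaches the socket until the inner layer is flushed too.

```typescript
// Minimal model of a layered writer stack (hypothetical, for illustration).
interface Writer {
  write(data: Uint8Array): void;
  flush(): void;
}

// Stands in for the raw TCP socket: bytes written here have left the process.
class Socket implements Writer {
  sent = 0;
  write(data: Uint8Array): void { this.sent += data.length; }
  flush(): void {}
}

// Stands in for a TLS layer that buffers plaintext before putting it on the wire.
class BufferedTls implements Writer {
  private buffered = 0;
  constructor(private inner: Writer) {}
  write(data: Uint8Array): void { this.buffered += data.length; }
  flush(): void {
    this.inner.write(new Uint8Array(this.buffered));
    this.buffered = 0;
  }
}

// Stands in for the HTTP server's buffered writer around the TLS stream.
class BufWriter implements Writer {
  private buf: Uint8Array[] = [];
  constructor(private inner: Writer) {}
  write(data: Uint8Array): void { this.buf.push(data); }
  // Flushes ONLY this layer; the inner (TLS) buffer is left untouched,
  // which is the behavior the issue describes.
  flush(): void {
    for (const chunk of this.buf) this.inner.write(chunk);
    this.buf = [];
  }
}

const socket = new Socket();
const tls = new BufferedTls(socket);
const writer = new BufWriter(tls);

writer.write(new Uint8Array(2 ** 17)); // 128 KiB response body
writer.flush();                        // flushes BufWriter into the TLS layer only
console.log(socket.sent);              // 0 — nothing has reached the socket yet

tls.flush();                           // flushing the inner layer as well...
console.log(socket.sent);              // 131072 — ...delivers all the data
```

This matches the symptom reported above: the server-side code believes the response is written, while part of it is still sitting in the TLS layer's buffer.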

@Pwuts Pwuts changed the title [URGENT] TLS not properly flushed, causing "large" responses to not arrive completely [URGENT] TLS not properly flushed, causing "large" responses to arrive incompletely Apr 7, 2021
@Pwuts Pwuts changed the title [URGENT] TLS not properly flushed, causing "large" responses to arrive incompletely TLS not properly flushed, causing "large" responses to arrive incompletely Apr 7, 2021
@lucacasonato lucacasonato added bug Something isn't working correctly high priority labels Apr 7, 2021
@lucacasonato lucacasonato added this to the 1.9.0 milestone Apr 7, 2021
@lucacasonato lucacasonato added tls Issues related to TLS implementation runtime Relates to code in the runtime crate labels Apr 7, 2021
piscisaureus (Member) commented:

Possibly related: #9692

Pwuts (Author) commented Apr 8, 2021

Yep, sounds like it's the same issue, or at least very much related. However, I experienced the problem with the latest version of Deno (1.8.3) on CentOS 8; I didn't test on Windows or in WSL.

piscisaureus (Member) commented Apr 8, 2021

I think the problem lies with deno_std's HTTP server implementation and not with our implementation of TLS.
The following test implements a "fake" HTTPS server that sends 128 KiB responses; when I hit it with curl, I always get the full response.

const listener = Deno.listenTls({
  hostname: "localhost",
  port: 8443,
  certFile: "./cert.pem",
  keyFile: "./key.pem",
});

const EOL = "\r\n";
let n = 0;

for await (const conn of listener) {
  const char = "a".charCodeAt(0) + (n++ % 26);
  const buf = new Uint8Array(2 ** 17);
  buf.fill(char);

  const head = new TextEncoder().encode(
    [`HTTP/1.1 200 OK`, `Content-Length: ${buf.length}`].join(EOL) + EOL + EOL,
  );

  await conn.write(head);
  await conn.write(buf);
}

timonson (Contributor) commented Apr 8, 2021

This seems likely to me, because the problem didn't happen in Deno 1.7 and earlier versions.
@piscisaureus please also consider what @kt3k was able to figure out here: denoland/std#750

Pwuts (Author) commented Apr 8, 2021

@piscisaureus Running that mock server on CentOS 8 (Deno 1.8.3) and trying to fetch the content with curl, the request hangs at 75% after receiving 98304 bytes. My browser is able to fetch the whole thing.
I'll leave my test server up at https://test.facebooklekcheck.nl (please excuse the FQDN :) so you can confirm.

Edit: I just noticed that when I hit F5 twice in quick succession, the request hangs in the browser as well. It feels like this happens when I refresh before the page finishes loading, but that may be an illusion.
Edit 2: Apparently that test server won't stay up; after a while it exits with the following error(s):

Illegal SNI hostname received [54, 53, 46, 50, 49, 46, 49, 48, 55, 46, 49, 52]
Sending fatal alert DecodeError
error: Uncaught (in promise) InvalidData: received corrupt message of type Handshake
    at handleError (deno:core/core.js:186:12)
    at jsonOpParseResult (deno:core/core.js:356:9)
    at asyncHandle (deno:core/core.js:223:40)
    at Array.asyncHandlers.<computed> (deno:core/core.js:238:9)
    at handleAsyncMsgFromRust (deno:core/core.js:207:32)

@lucacasonato lucacasonato modified the milestones: 1.9.0, 1.10.0 Apr 23, 2021
piscisaureus added a commit to piscisaureus/deno that referenced this issue May 10, 2021
In denoland#9118, TLS streams were split into a "read half" and a "write half"

using tokio::io::split() to allow concurrent Conn#read() and
Conn#write() calls without one blocking the other. However, this
introduced a bug: outgoing data gets discarded when the TLS stream is
gracefully closed, because the read half is closed too early, before all
TLS control data has been received.

Fixes: denoland#9692
Fixes: denoland#10049
Fixes: denoland#10296
Fixes: denoland/std#750
piscisaureus added a commit to piscisaureus/deno that referenced this issue May 10, 2021
denoland#10146)

In denoland#9118, TLS streams were split into a "read half" and a "write half"
using tokio::io::split() to allow concurrent Conn#read() and
Conn#write() calls without one blocking the other. However, this
introduced a bug: outgoing data gets discarded when the TLS stream is
gracefully closed, because the read half is closed too early, before all
TLS control data has been received.

Fixes: denoland#9692
Fixes: denoland#10049
Fixes: denoland#10296
Fixes: denoland/std#750
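The bug the commit message describes can be sketched conceptually. The class below is a hypothetical simulation, not the actual Rust/tokio implementation: a TLS shutdown is a two-way exchange, so the read half must stay open until pending data and the close_notify alerts have been exchanged; dropping it too early aborts the exchange and discards unsent output.

```typescript
// Conceptual simulation of the split-halves bug (hypothetical, for illustration).
class FakeTlsStream {
  pendingOut: number[] = [];  // application data not yet put on the wire
  wire: number[] = [];        // what the peer actually receives
  readOpen = true;

  write(byte: number): void { this.pendingOut.push(byte); }

  // Graceful close only completes if the read half is still open, because
  // finishing the TLS shutdown requires receiving the peer's close_notify.
  close(): void {
    if (this.readOpen) {
      this.wire.push(...this.pendingOut); // remaining data is flushed
    }
    this.pendingOut = [];                 // otherwise it is discarded
  }
}

// Buggy order: the read half is dropped before the stream is closed.
const buggy = new FakeTlsStream();
buggy.write(1); buggy.write(2); buggy.write(3);
buggy.readOpen = false; // read half closed too early
buggy.close();
console.log(buggy.wire.length); // 0 — outgoing data was discarded

// Fixed order: keep the read half open until shutdown completes.
const fixed = new FakeTlsStream();
fixed.write(1); fixed.write(2); fixed.write(3);
fixed.close();
console.log(fixed.wire.length); // 3 — all data reaches the peer
```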
ry (Member) commented May 11, 2021

Reopening. Fix reverted in #10595. False alarm.

@ry ry reopened this May 11, 2021
Pwuts (Author) commented May 11, 2021

I have no idea what just happened: the merge, then revert, reopening this issue but actually no? Has the issue been fixed in the end? :)


6 participants