Response header overflow leads to buffer corruptions #4936

Closed
nathankooij opened this issue Jun 3, 2020 · 11 comments · Fixed by #4937

Labels: Bug (For general bugs on Jetty side), High Priority

Comments

@nathankooij

nathankooij commented Jun 3, 2020

Jetty version
9.4.29 (but traced it back to 9.4.27)

Java version

openjdk 11.0.7 2020-04-14
OpenJDK Runtime Environment 18.9 (build 11.0.7+10)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.7+10, mixed mode)

OS type/version
Debian GNU/Linux 10 (buster)

Description
We run several Spring Boot 2 applications, originally deployed as WAR files on a standalone Jetty server (9.4.15). Recently we changed these to use an embedded Jetty server, which included an upgrade to Jetty 9.4.28. Subsequently, we intermittently observed errors in one of our applications, which serves an admin page using webjars. The logs indicated errors such as:

  • org.eclipse.jetty.server.HttpChannel () : /path-to-a-webjar org.springframework.security.web.firewall.RequestRejectedException: The request was rejected because the HTTP method "HTGET" was not included within the whitelist [HEAD, DELETE, POST, GET, OPTIONS, PATCH, PUT]
    This was a valid request from the client side, but somehow the HTTP method got corrupted.

  • o.e.j.u.thread.strategy.EatWhatYouKill (): java.lang.IllegalArgumentException: newPosition > limit: (212 > 0)
    Truncated this one, see below for an example of a stacktrace.

  • org.eclipse.jetty.http.BadMessageException: 500: Response header too large

  • o.e.j.u.thread.strategy.EatWhatYouKill () : java.nio.BufferOverflowException: null

The admin page itself would also become highly unresponsive, and some files would fail to be served. A restart of the application would fix the errors, though the problem would also resolve itself "eventually". Moreover, the errors' frequency grew with load, but we couldn't reproduce them consistently. This application was affected the most, but we also observed the errors in other applications.

We managed to reproduce the issue by creating an endpoint in which we would explicitly send a response header larger than the configured maximum size (8KB), after which an application would immediately become unresponsive and all aforementioned errors would start appearing. We confirmed this behavior wasn't present in our previous Jetty version 9.4.15, and found it was first introduced in 9.4.27. (Perhaps related to #4541?)

This might also be the same issue as observed in #4828, but 9.4.29 which included the potential fix didn't resolve the issue for us.

For now we have worked around this by allowing larger response headers, as we do have a legitimate use case for them.
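
For reference, a minimal sketch of this kind of workaround with embedded Jetty 9.4 (the class name and the 16 KiB limit are illustrative, not our exact configuration):

    import org.eclipse.jetty.server.HttpConfiguration;
    import org.eclipse.jetty.server.HttpConnectionFactory;
    import org.eclipse.jetty.server.Server;
    import org.eclipse.jetty.server.ServerConnector;

    public class LargerResponseHeaders
    {
        // Build a Server whose connector accepts response headers up to 16 KiB
        // (the default responseHeaderSize is 8 KiB).
        public static Server configure(int port)
        {
            Server server = new Server();
            HttpConfiguration config = new HttpConfiguration();
            config.setResponseHeaderSize(16 * 1024);
            ServerConnector connector = new ServerConnector(server, new HttpConnectionFactory(config));
            connector.setPort(port);
            server.addConnector(connector);
            return server;
        }
    }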

I have provided below a test that can be used to trigger the behavior. It consists of sending a (too) large response header and then concurrently triggering many requests. From my testing, concurrency is necessary, which could explain why the error was observed most frequently on our admin page (it serves many webjars separately rather than bundled).

Lastly, from a brief look into the issue: in one case I noticed that when the header overflow was triggered, the HttpConnection#_header buffer was released, and before any error handling completed, the same buffer was picked up by HttpParser#quickStart with its limit set to 0. These were two different requests, which could explain the HTTP method corruption, for instance. So it seems the buffer cleanup in the header-overflow case is not handled correctly.

If you need any more information, please do let me know.

Example test
Please look at the logs to observe the triggered errors. Further below I have also provided some sample stacktraces that can be reproduced by this test.

    // Assumed imports for this snippet (JUnit 5 shown; adjust if using JUnit 4):
    //   java.io.IOException, java.io.PrintWriter, java.nio.charset.StandardCharsets,
    //   java.util.Arrays, java.util.concurrent.ExecutorService, java.util.concurrent.Executors,
    //   java.util.concurrent.TimeUnit, javax.servlet.ServletException,
    //   javax.servlet.http.HttpServletRequest, javax.servlet.http.HttpServletResponse,
    //   org.eclipse.jetty.http.HttpHeader, org.eclipse.jetty.http.MimeTypes,
    //   org.eclipse.jetty.server.HttpConfiguration, org.eclipse.jetty.server.HttpConnectionFactory,
    //   org.eclipse.jetty.server.LocalConnector, org.eclipse.jetty.server.Request,
    //   org.eclipse.jetty.server.Server, org.eclipse.jetty.server.handler.AbstractHandler,
    //   org.eclipse.jetty.server.handler.ErrorHandler, org.junit.jupiter.api.Test
    @Test
    public void testBufferCorruption() throws Exception
    {
        Server server = new Server();

        HttpConfiguration config = new HttpConfiguration();
        HttpConnectionFactory http = new HttpConnectionFactory(config);

        LocalConnector connector = new LocalConnector(server, http, null);
        connector.setIdleTimeout(5000);
        server.addConnector(connector);
        ErrorHandler eh = new ErrorHandler();
        eh.setServer(server);
        server.addBean(eh);

        // Build a header value well over the default 8 KiB response header limit.
        byte[] bytes = new byte[8 * 1024];
        Arrays.fill(bytes, (byte)'X');
        final String longstr = "thisisastringthatshouldreachover8kbytes-" + new String(bytes, StandardCharsets.ISO_8859_1) + "_Z_";
        server.setHandler(new AbstractHandler()
        {
            @SuppressWarnings("unused")
            @Override
            public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
            {
                baseRequest.setHandled(true);
                response.setHeader(HttpHeader.CONTENT_TYPE.toString(), MimeTypes.Type.TEXT_HTML.toString());
                response.setHeader("LongStr", longstr);
                PrintWriter writer = response.getWriter();
                writer.write("<html><h1>FOO</h1></html>");
                writer.flush();
                response.flushBuffer();
            }
        });
        server.start();

        // Fire many concurrent requests; in my testing the corruption only shows up under concurrency.
        ExecutorService executorService = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 500; ++i)
        {
            executorService.submit(() ->
                    connector.getResponse("GET / HTTP/1.1\r\n" +
                            "Host: localhost\r\n" +
                            "\r\n"));
        }

        executorService.shutdown();
        executorService.awaitTermination(5, TimeUnit.SECONDS);
        server.stop();
    }

Example stacktraces

2020-06-03 15:48:13.693:WARN:oejut.QueuedThreadPool:qtp954702563-33: 
java.nio.BufferOverflowException
	at java.base/java.nio.HeapByteBuffer.put(HeapByteBuffer.java:216)
	at org.eclipse.jetty.util.BufferUtil.put(BufferUtil.java:399)
	at org.eclipse.jetty.util.BufferUtil.append(BufferUtil.java:484)
	at org.eclipse.jetty.io.ByteArrayEndPoint.fill(ByteArrayEndPoint.java:398)
	at org.eclipse.jetty.server.HttpConnection.fillRequestBuffer(HttpConnection.java:336)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:254)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
	at org.eclipse.jetty.io.ByteArrayEndPoint$1.run(ByteArrayEndPoint.java:76)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
	at java.base/java.lang.Thread.run(Thread.java:834)

java.lang.ArrayIndexOutOfBoundsException: arraycopy: last destination index 8194 out of bounds for byte[8192]
	at java.base/java.lang.System.arraycopy(Native Method)
	at java.base/java.nio.HeapByteBuffer.put(HeapByteBuffer.java:234)
	at org.eclipse.jetty.util.BufferUtil.put(BufferUtil.java:392)
	at org.eclipse.jetty.util.BufferUtil.append(BufferUtil.java:484)
	at org.eclipse.jetty.io.ByteArrayEndPoint.fill(ByteArrayEndPoint.java:398)
	at org.eclipse.jetty.server.HttpConnection.fillRequestBuffer(HttpConnection.java:336)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:254)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
	at org.eclipse.jetty.io.ByteArrayEndPoint$1.run(ByteArrayEndPoint.java:76)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
	at java.base/java.lang.Thread.run(Thread.java:834)

java.lang.IllegalArgumentException: newPosition > limit: (5 > 0)
	at java.base/java.nio.Buffer.createPositionException(Buffer.java:318)
	at java.base/java.nio.Buffer.position(Buffer.java:293)
	at java.base/java.nio.ByteBuffer.position(ByteBuffer.java:1086)
	at java.base/java.nio.ByteBuffer.position(ByteBuffer.java:262)
	at org.eclipse.jetty.util.BufferUtil.flipToFlush(BufferUtil.java:219)
	at org.eclipse.jetty.http.HttpGenerator.generateResponse(HttpGenerator.java:459)
	at org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:743)
	at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
	at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:223)
	at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:549)
	at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:833)
	at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:851)
	at org.eclipse.jetty.server.HttpChannel.onBadMessage(HttpChannel.java:784)
	at org.eclipse.jetty.server.HttpChannelOverHttp.badMessage(HttpChannelOverHttp.java:283)
	at org.eclipse.jetty.http.HttpParser.badMessage(HttpParser.java:1628)
	at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1610)
	at org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:364)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:261)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
	at org.eclipse.jetty.io.ByteArrayEndPoint$1.run(ByteArrayEndPoint.java:76)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
	at java.base/java.lang.Thread.run(Thread.java:834)
nathankooij changed the title from "Response header overflow leads to buffer corruptions in embedded Jetty" to "Response header overflow leads to buffer corruptions in embedded Jetty 9.4.29" Jun 3, 2020
joakime added a commit that referenced this issue Jun 3, 2020
joakime added a commit that referenced this issue Jun 3, 2020
joakime added the Bug (For general bugs on Jetty side) and High Priority labels Jun 3, 2020
@joakime
Contributor

joakime commented Jun 3, 2020

We've been able to replicate with a variant of your test case. (It proved very useful! Thanks!)

We have a general fix working currently, but are reviewing other buffer pool usages for similar issues.

joakime added a commit that referenced this issue Jun 3, 2020
gregw added a commit that referenced this issue Jun 3, 2020
If the response buffer is too large, the header buffer was released
but not nulled, then an exception thrown, which again released the
not nulled buffer.  The buffer thus ends up in the buffer pool twice!

Signed-off-by: Greg Wilkins <[email protected]>
gregw linked a pull request Jun 3, 2020 that will close this issue
@gregw
Contributor

gregw commented Jun 3, 2020

Thanks for the detailed bug report, test case and initial analysis.
You were right: it was the change from #4541. The issue was that in the case of a header buffer overflow, we released the header buffer but did not null the field. We then threw an exception, and the onCompleteFailure handling called release on the class, which released the buffer again (since the field was not null). Thus the buffer was put into the pool twice and could be taken out by two threads and worked on at the same time. The server would have been unstable from that point on!
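
As a minimal illustrative sketch of the pattern described above (stand-in code, not the actual Jetty HttpConnection source; the pool here is a toy version of Jetty's ByteBufferPool):

    import java.nio.ByteBuffer;
    import java.util.ArrayDeque;
    import java.util.Deque;

    public class DoubleReleaseSketch
    {
        // Toy stand-in for a buffer pool such as Jetty's ByteBufferPool.
        static class BufferPool
        {
            private final Deque<ByteBuffer> pool = new ArrayDeque<>();

            ByteBuffer acquire(int size)
            {
                ByteBuffer buffer = pool.poll();
                return buffer != null ? buffer : ByteBuffer.allocate(size);
            }

            void release(ByteBuffer buffer)
            {
                pool.offer(buffer); // no duplicate check: releasing twice pools the same buffer twice
            }
        }

        private final BufferPool pool = new BufferPool();
        private ByteBuffer header = pool.acquire(8192);

        // Buggy path: the buffer is released but the field keeps pointing at it.
        void onHeaderOverflow()
        {
            pool.release(header); // field not nulled
            throw new IllegalStateException("500: Response header too large");
        }

        // Failure handling then releases the same buffer a second time.
        void onCompleteFailure()
        {
            if (header != null)
                pool.release(header); // second release: two threads can now acquire the same buffer
        }

        // Fixed pattern (in the spirit of #4937): clear the field before releasing,
        // so any later cleanup sees null and does nothing.
        void releaseHeaderSafely()
        {
            ByteBuffer h = header;
            header = null;
            if (h != null)
                pool.release(h);
        }
    }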

Your good report has enabled a quick fix and we'll get a release out asap!

@nathankooij
Author

Thanks for the quick turnaround and clear explanation! Looking forward to the release.

gregw added a commit that referenced this issue Jun 3, 2020
removed old comment

Signed-off-by: Greg Wilkins <[email protected]>
gregw added a commit that referenced this issue Jun 3, 2020
* Issue #4936 - Adding LargeHeaderTest to replicate issue

Signed-off-by: Joakim Erdfelt <[email protected]>

* Issue #4936 - Updating LargeHeaderTest to use ServerConnector

Signed-off-by: Joakim Erdfelt <[email protected]>

* Issue #4936 - Fail LargeHeaderTest if client detects issues.

Signed-off-by: Joakim Erdfelt <[email protected]>

* Issue #4936 large response header buffer corruption

If the response buffer is too large, the header buffer was released
but not nulled, then an exception thrown, which again released the
not nulled buffer.  The buffer thus ends up in the buffer pool twice!

Signed-off-by: Greg Wilkins <[email protected]>

* Issue #4936 large response header buffer corruption

removed old comment

Signed-off-by: Greg Wilkins <[email protected]>

Co-authored-by: Joakim Erdfelt <[email protected]>
@joakime
Contributor

joakime commented Jun 3, 2020

Merged into jetty-9.4.x branch (and merged up into jetty-10.0.x and jetty-11.0.x as well).

joakime closed this as completed Jun 3, 2020
joakime changed the title from "Response header overflow leads to buffer corruptions in embedded Jetty 9.4.29" to "Response header overflow leads to buffer corruptions" Jun 10, 2020
@msatam

msatam commented Jul 4, 2020

We have identified that the bug described here also has security implications. It turns out that under heavy load the corrupted HTTP response buffer is, in many cases, sent back to the wrong client, so the server ends up returning incorrect responses to different clients. We had a few cases where, due to this flaw, clients were able to jump sessions and gain cross-account access: sometimes the buffer is not completely corrupted and ends up carrying one user's authentication cookies in another user's response, allowing user A to take over user B's session. We were running version 9.4.28 when this started occurring, and as a result we have rolled back to an earlier version where we did not see the bug. I think opening a CVE for this bug to flag the potential security implications would be a good idea.

@gregw
Contributor

gregw commented Jul 6, 2020

Good idea.

gregw reopened this Jul 6, 2020
lucamilanesio pushed a commit to GerritCodeReview/gerrit that referenced this issue Jul 21, 2020
This upgrade fixed critical error: [1], that was a follow-up of the
previous fix: [2].

Response header overflow leads to buffer corruptions
Jetty server always allocates maximum response header size

[1] jetty/jetty.project#4936
[2] jetty/jetty.project#4541

Bug: Issue 12846
Change-Id: Ibc47c8b332f433afb6fdee8e78452380996c1dcb
@WalkerWatch
Contributor

A CVE was filed for this issue, CVE-2019-17638, and the issue was resolved in 9.4.30.v20200611.

@hossman

hossman commented Sep 25, 2020

Why does the linked CVE indicate that "in case of too large response headers, Jetty throws an exception to produce an HTTP 431 error"?

  • as far as I can tell from manual testing and a casual review of the code/tests modified in this issue, "Response header too large" always results in a 500 error
    • throw new BadMessageException(INTERNAL_SERVER_ERROR_500, "Response header too large");
    • assertThat(response.getStatus(), is(500));
  • if the application generates response headers that are too big, 431 wouldn't be appropriate - it's only for use when request headers are too large

@gregw
Contributor

gregw commented Sep 26, 2020

@hossman, you are correct. We return a 500 and did so before the CVE and fix as well.
@WalkerWatch can you ask for the text of the CVE to be updated?

ShashwatArghode added a commit to ShashwatArghode/airlift that referenced this issue Nov 17, 2020
As the  max request header size and response heder size are configurable in airlift, CVE-2019-17638 (https://nvd.nist.gov/vuln/detail/CVE-2019-17638) might affect it.
Jetty version update to 9.4.30.v20200611 will fix the issue in lights of CVE-2019-17638 and jetty/jetty.project#4936.
findepi pushed a commit to airlift/airlift that referenced this issue Nov 19, 2020
As the  max request header size and response heder size are configurable in airlift, CVE-2019-17638 (https://nvd.nist.gov/vuln/detail/CVE-2019-17638) might affect it.
Jetty version update to 9.4.30.v20200611 will fix the issue in lights of CVE-2019-17638 and jetty/jetty.project#4936.
ShashwatArghode added a commit to ShashwatArghode/airlift that referenced this issue Aug 5, 2021
As the  max request header size and response heder size are configurable in airlift, CVE-2019-17638 (https://nvd.nist.gov/vuln/detail/CVE-2019-17638) might affect it.
Jetty version update to 9.4.30.v20200611 will fix the issue in lights of CVE-2019-17638 and jetty/jetty.project#4936.
ShashwatArghode added a commit to ShashwatArghode/airlift that referenced this issue Aug 11, 2021
As the  max request header size and response heder size are configurable in airlift, CVE-2019-17638 (https://nvd.nist.gov/vuln/detail/CVE-2019-17638) might affect it.
Jetty version update to 9.4.30.v20200611 will fix the issue in lights of CVE-2019-17638 and jetty/jetty.project#4936.
ShashwatArghode added a commit to ShashwatArghode/airlift that referenced this issue Aug 12, 2021
As the  max request header size and response heder size are configurable in airlift, CVE-2019-17638 (https://nvd.nist.gov/vuln/detail/CVE-2019-17638) might affect it.
Jetty version update to 9.4.30.v20200611 will fix the issue in lights of CVE-2019-17638 and jetty/jetty.project#4936.
@ritesh3d

@gregw @joakime @sbordet I have recently been facing a similar issue: when multiple requests are triggered at once we see the following error, but each request processed individually works just fine.

org.eclipse.jetty.http.BadMessageException: 500: Response header too large
at org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:762)
at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:223)
at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:544)
at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:910)
at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:987)
at org.eclipse.jetty.server.HttpOutput.channelWrite(HttpOutput.java:285)
at org.eclipse.jetty.server.HttpOutput.channelWrite(HttpOutput.java:269)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:896)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:303)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:307)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:153)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:287)
at java.io.BufferedWriter.flush(BufferedWriter.java:265)
at com.approuter.module.http.protocol.HttpTransportResponder.sendResponse(HttpTransportResponder.java:176)
at com.approuter.module.http.activity.HttpSendReply.execute(HttpSendReply.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at com.approuter.maestro.sdk.mpi.DynamicExecutableActivity.execute(DynamicExecutableActivity.java:368)
at com.approuter.maestro.activities.Invoke.call(Invoke.java:216)
at sun.reflect.GeneratedMethodAccessor626.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at com.approuter.maestro.activities.Instruction.call(Instruction.java:45)
at com.approuter.maestro.vm.Program.call(Program.java:599)
at com.approuter.maestro.vm.Task.run(Task.java:701)
at com.approuter.maestro.vm.Task.run(Task.java:639)
at com.approuter.maestro.vm.Program$RunnableWrapper.run(Program.java:2222)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:522)
at java.util.concurrent.FutureTask.run(FutureTask.java:277)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:191)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:812)
2021-08-13 17:47:11.483 SEVERE [T-9678] [E-BC6C37449E06E08AA4B4577A118E46F6] [job:96743252172AF14764BFBDC6FC1E4AB7] [helper.HttpLogHelper] An unknown exception occurred while sending the response to the client. Exception: 500: Response header too large
java.io.IOException: 500: Response header too large
at com.approuter.module.http.protocol.HttpTransportResponder.sendResponse(HttpTransportResponder.java:228)
at com.approuter.module.http.activity.HttpSendReply.execute(HttpSendReply.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:90)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at com.approuter.maestro.sdk.mpi.DynamicExecutableActivity.execute(DynamicExecutableActivity.java:368)
at com.approuter.maestro.activities.Invoke.call(Invoke.java:216)
at sun.reflect.GeneratedMethodAccessor626.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:508)
at com.approuter.maestro.activities.Instruction.call(Instruction.java:45)
at com.approuter.maestro.vm.Program.call(Program.java:599)
at com.approuter.maestro.vm.Task.run(Task.java:701)
at com.approuter.maestro.vm.Task.run(Task.java:639)
at com.approuter.maestro.vm.Program$RunnableWrapper.run(Program.java:2222)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:522)
at java.util.concurrent.FutureTask.run(FutureTask.java:277)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:191)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:812)
Caused by: org.eclipse.jetty.http.BadMessageException: 500: Response header too large
at org.eclipse.jetty.server.HttpConnection$SendCallback.process(HttpConnection.java:762)
at org.eclipse.jetty.util.IteratingCallback.processing(IteratingCallback.java:241)
at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:223)
at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:544)
at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:910)
at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:987)
at org.eclipse.jetty.server.HttpOutput.channelWrite(HttpOutput.java:285)
at org.eclipse.jetty.server.HttpOutput.channelWrite(HttpOutput.java:269)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:896)
at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:233)
at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:303)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:307)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:153)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:287)
at java.io.BufferedWriter.flush(BufferedWriter.java:265)
at com.approuter.module.http.protocol.HttpTransportResponder.sendResponse(HttpTransportResponder.java:176)
... 22 more

I see a similar issue was fixed in version 9.4.30; we are currently on 9.4.38.v20210224. I haven't specified any value for the response buffer size, and the request headers don't carry any information; all the data is passed in the body content.

Can you please advise.

@joakime
Contributor

joakime commented Aug 17, 2021

@ritesh3d then you likely have a genuine case of response headers that are too big, pointing to a possible bug in your code.

Set up a debug breakpoint here ...

https://github.com/eclipse/jetty.project/blob/jetty-9.4.38.v20210224/jetty-server/src/main/java/org/eclipse/jetty/server/HttpConnection.java#L759-L766

https://github.com/eclipse/jetty.project/blob/288f3cc74549e8a913bf363250b0744f2695b8e6/jetty-server/src/main/java/org/eclipse/jetty/server/HttpConnection.java#L759-L766

And look at what is currently in the _header ByteBuffer, and the values in the _info (like the _fields themselves).
It should become apparent what is causing your specific issue.
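
If it helps to narrow things down before attaching a debugger, here is a rough diagnostic sketch (assumed helper code, not a Jetty API) that approximates the serialized size of the headers already set on a response, so you can see which header is pushing past the configured responseHeaderSize:

    import java.util.Collection;
    import javax.servlet.http.HttpServletResponse;

    public final class HeaderSizeEstimator
    {
        private HeaderSizeEstimator() {}

        // Roughly "Name: value\r\n" per header line; ignores the status line and final CRLF.
        public static int approximateHeaderBytes(HttpServletResponse response)
        {
            int total = 0;
            for (String name : response.getHeaderNames())
            {
                Collection<String> values = response.getHeaders(name);
                for (String value : values)
                    total += name.length() + 2 + value.length() + 2;
            }
            return total;
        }
    }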

aweisberg pushed a commit to prestodb/airlift that referenced this issue Oct 4, 2021
As the  max request header size and response heder size are configurable in airlift, CVE-2019-17638 (https://nvd.nist.gov/vuln/detail/CVE-2019-17638) might affect it.
Jetty version update to 9.4.30.v20200611 will fix the issue in lights of CVE-2019-17638 and jetty/jetty.project#4936.
aweisberg pushed a commit to prestodb/airlift that referenced this issue Oct 23, 2021
As the  max request header size and response heder size are configurable in airlift, CVE-2019-17638 (https://nvd.nist.gov/vuln/detail/CVE-2019-17638) might affect it.
Jetty version update to 9.4.30.v20200611 will fix the issue in lights of CVE-2019-17638 and jetty/jetty.project#4936.