session: fix grpc message size limits for tar streams #4313
Conversation
These limits were already set for control API requests but not for session requests. Signed-off-by: Tonis Tiigi <[email protected]>
force-pushed from 9ee2bb2 to 463bdab
```go
if n2, err = wc.Write(dt); err != nil {
	return n1 + n2, err
}
return n1 + n2, nil
```
nit: could we change this recursion into a loop? If we get an error deep in the stack with very large messages, the stack trace will be pretty unreadable.
Realistically, I would never expect to see messages that big in here. The limit is 16MB anyway, so anything bigger than 5x the chunk size would already start to fail on those limits.
```diff
@@ -47,6 +47,22 @@ type streamWriterCloser struct {
 }

+func (wc *streamWriterCloser) Write(dt []byte) (int, error) {
+	// grpc-go has a 4MB limit on messages by default. Split large messages
+	// so we don't get close to that limit.
+	const maxChunkSize = 3 * 1024 * 1024
```
Just a note - this logic feels very similar to what's already implemented in fsutil.Send with a buffer pool. I wonder if (as a follow-up) we could push the logic for sending/receiving single files into fsutil, which would let us re-use the same limiting logic, so if we want to update the size of the messages we send, we don't have to do it everywhere.
When exporting a tar stream (with SBOM) it is currently possible to hit grpc message size limits. While we raise the limits to 16MB for the grpc control API, we don't currently do the same for the session API.
This can be reproduced with:
The first commit makes it so that the tar producer never sends chunks bigger than 3MB in a single message.
The second commit raises the limits on the client side to match the control API. Only one of the two fixes is needed to resolve the issue in the reproducer; the second commit is for consistency and to cover the case where a new client accesses an old daemon.
The third commit is for uploadprovider, which is used when the build context is provided as a tar stream. This is unrelated to the reproducer, but it looks like the current implementation could theoretically be affected in the same way. I'm not sure if there is a way to reproduce this in practice.