session: fix grpc message size limits for tar streams #4313

Merged: 3 commits into moby:master on Nov 13, 2023

Conversation

tonistiigi (Member)

When exporting a tar stream (with SBOM) it is currently possible to hit the grpc message size limits. While we raise the limits to 16MB for the grpc control API, we don't currently do the same for the session API.

This can be reproduced with the following Dockerfile:

```dockerfile
FROM laurentgoderre689/tar-too-big
RUN echo "Hello World"
```

built with:

```console
docker buildx b . --platform linux/amd64 -t blah --sbom=true --provenance=true -o type=tar,dest=tar.tar
```

The first commit makes it so that the tar producer never sends chunks bigger than 3MB in a single message.

The second commit raises the limits on the client side to match the control API. Either fix alone is enough to resolve the issue in the reproducer; the second commit is there for consistency, and so that the issue is also fixed when a new client talks to an old daemon.
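For context, raising grpc-go's per-message limits on the client side generally looks like the sketch below. The 16MB value matches the control API limit mentioned above; the package layout and the dialWithLimits helper are illustrative assumptions, not the actual wiring in this PR.

```go
package sessionlimits

import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// maxMsgSize mirrors the 16MB limit already used for the control API.
const maxMsgSize = 16 * 1024 * 1024

// dialWithLimits is a hypothetical helper that shows where the
// call-option limits are applied on a client connection.
func dialWithLimits(addr string) (*grpc.ClientConn, error) {
	return grpc.Dial(addr,
		grpc.WithTransportCredentials(insecure.NewCredentials()),
		grpc.WithDefaultCallOptions(
			grpc.MaxCallRecvMsgSize(maxMsgSize),
			grpc.MaxCallSendMsgSize(maxMsgSize),
		),
	)
}
```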

The third commit is for the uploadprovider, which is used when the build context is provided as a tar stream. This is unrelated to the reproducer, but it looks like the current implementation could theoretically be affected in the same way. I'm not sure if there is a way to reproduce this in practice.
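The chunking idea behind the first and third commits can be sketched roughly as follows; the sender interface and writeChunked helper are simplified stand-ins, not the code in this PR.

```go
package chunked

// maxChunkSize stays well under grpc-go's default 4MB message limit.
const maxChunkSize = 3 * 1024 * 1024

// sender is a stand-in for the grpc stream that the real writer wraps.
type sender interface {
	SendMsg(chunk []byte) error
}

// writeChunked splits a large buffer into messages of at most
// maxChunkSize bytes so that no single grpc message approaches the limit.
func writeChunked(s sender, dt []byte) (int, error) {
	written := 0
	for len(dt) > 0 {
		chunk := dt
		if len(chunk) > maxChunkSize {
			chunk = chunk[:maxChunkSize]
		}
		if err := s.SendMsg(chunk); err != nil {
			return written, err
		}
		written += len(chunk)
		dt = dt[len(chunk):]
	}
	return written, nil
}
```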

These limits were already set for control API requests but
not for session requests.

Signed-off-by: Tonis Tiigi <[email protected]>
Comment on lines +60 to +63
```go
if n2, err = wc.Write(dt); err != nil {
	return n1 + n2, err
}
return n1 + n2, nil
```
Member
nit: could we change this recursion into a loop? If we get an error deep in the stack with very large messages, the stack trace will be pretty unreadable.

Member Author

Realistically, I would never expect to get messages that big in here. The limit is 16MB anyway, so anything bigger than about 5x the chunk size would already start to fail on the size limit itself.

```diff
@@ -47,6 +47,22 @@ type streamWriterCloser struct {
 }
 
 func (wc *streamWriterCloser) Write(dt []byte) (int, error) {
+	// grpc-go has a 4MB limit on messages by default. Split large messages
+	// so we don't get close to that limit.
+	const maxChunkSize = 3 * 1024 * 1024
```
Member

Just a note - this logic feels very similar to what's already implemented in fsutil.Send with a buffer pool. I wonder if (as a follow-up) we could push the logic to send/receive single files into fsutil (which could let us re-use the same limiting logic, so if we want to update the size of the messages we send, we don't have to do it everywhere).
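For reference, the buffer-pool pattern mentioned above looks roughly like the sketch below. This is a generic illustration, not fsutil's actual implementation, and sendChunk is a hypothetical helper.

```go
package bufpool

import "sync"

// chunkSize matches the 3MB chunk size used for splitting messages.
const chunkSize = 3 * 1024 * 1024

// bufPool reuses fixed-size chunk buffers across sends instead of
// allocating a fresh buffer for every message.
var bufPool = sync.Pool{
	New: func() any {
		buf := make([]byte, chunkSize)
		return &buf
	},
}

// sendChunk copies dt into a pooled buffer, hands it to send, and
// returns the buffer to the pool afterwards.
func sendChunk(send func([]byte) error, dt []byte) error {
	bufp := bufPool.Get().(*[]byte)
	defer bufPool.Put(bufp)
	n := copy(*bufp, dt)
	return send((*bufp)[:n])
}
```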

session/grpc.go (conversation resolved)
tonistiigi merged commit 892fbdf into moby:master on Nov 13, 2023.
53 of 55 checks passed