Describe the bug
When copying a large path from an S3-based cache, the nix binary appears to buffer the entire download into memory.

When the path is large enough (above approximately 3.5 GB), this also reliably causes nix to segfault.

I can only reproduce this with an S3-based Nix cache. If I use an HTTP cache, memory usage stays low and constant.
Steps To Reproduce

cd $(mktemp -d)
dd if=/dev/urandom of=./random_4g.bin bs=1M count=4096
path=$(nix store add-path . --name large-random)
# copy path to s3 store
nix store delete $path
# attempt to re-import
nix copy --to local --from <the s3 cache> $path
# experience segfault
74861 segmentation fault  nix copy --to local /nix/store/rv559vmhs7751xizmfnxk5bwyjhfizpa-large-random
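The upload step elided by the "# copy path to s3 store" comment above is assumed to be an ordinary nix copy to the S3 binary cache; the bucket name and region below are hypothetical placeholders:

# hypothetical upload step; substitute the real bucket and region for your cache
nix copy --to 's3://example-nix-cache?region=eu-west-1' $path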
Expected behavior
Nix uses a fixed amount of memory and does not segfault.
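One way to check this expectation, assuming GNU time is installed at /usr/bin/time (the -v flag is not available in the BSD/macOS time), is to record the peak resident set size of the failing copy and compare it against the path size:

# if nix streams the download, peak RSS should stay far below the ~4 GB path size
/usr/bin/time -v nix copy --to local --from <the s3 cache> $path 2>&1 | grep 'Maximum resident set size'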
Metadata
nix-env (Nix) 2.22.0

I have also experienced this with Nix 2.25.0.
Additional context
Checklist
Add 👍 to issues you find important.