pipelined extraction #236
Draft: cosmicexplorer wants to merge 8 commits into zip-rs:master from cosmicexplorer:pipelined-extract-v2
- initial sketch of lexicographic trie for pipelining
- move path splitting into a submodule
- lex trie can now propagate entry data
- outline handle allocation
- mostly handle files
- mostly handle dirs
- clarify symlink FIXMEs
- do symlink validation
- extract writable dir setting to helper method
- modify args to handle allocation method
- handle allocation test passes
- simplify perms a lot
- outline evaluation
- handle symlinks
- BIGGER CHANGE! add EntryReader/etc
- make initial pipelined extract work
- fix file perms by writing them after finishing the file write
- support directory entries by unix mode as well
- impl split extraction
- remove dependency on reader refactoring
- add dead_code to methods we don't use yet
cosmicexplorer force-pushed the pipelined-extract-v2 branch 4 times, most recently from 7a45b32 to 5cec332 on August 21, 2024 at 04:21.
Recreation of #208 to work around GitHub issues. Still a draft, as it needs to address the extant comments from that PR.
Problem
`ZipArchive::extract()` corresponds to the way most zip implementations perform the task, but it's single-threaded. This is appropriate under the assumptions imposed by Rust's `Read` and `Seek` traits, where mutable access is necessary and only one reader can extract file contents at a time. However, most unix-like operating systems offer a `pread()` operation which avoids mutating OS state like the file offset, so multiple threads can read from a file handle at once. The Go programming language offers `io.ReaderAt` in the stdlib to codify this ability.

Solution
This is a rework of #72 which avoids introducing unnecessary thread pools and creates all output file handles and containing directories up front. For large zips, we want to:

`src/read/split.rs` was created to cover `pread()` and other operations, while `src/read/pipelining.rs` was created to perform the high-level logic to split up entries and perform pipelined extraction.

Result
- A `parallelism` feature was added to the crate to gate the newly added code + API.
- The `libc` crate was added for `#[cfg(all(unix, feature = "parallelism"))]` in order to make use of OS-specific functionality.
- `zip::read::split_extract()` was added as a new external API to extract `&ZipArchive<fs::File>` when `#[cfg(all(unix, feature = "parallelism"))]`.

Note that this does not handle symlinks yet, which I plan to add in a followup PR.
CURRENT BENCHMARK STATUS
On a linux host (with `splice()` and optionally `copy_file_range()`), we get about a 6.5x speedup with 12 decompression threads:

The performance should keep increasing as we increase thread count, up to the number of available CPU cores (this was running with a parallelism of 12 on my 16-core laptop). This also works on macOS and BSDs, and other `#[cfg(unix)]` platforms.