Unified allocation throttling #17020
Open
amotin wants to merge 2 commits into openzfs:master from amotin:alloc
Conversation
Force-pushed from d883f10 to ff6e15d
I am still thinking whether it would make sense to give a smaller but faster vdev some write boost at the beginning even on an idle pool, so that it could be used more during reads, even at the cost of somewhat lower write speeds later. I suppose there may be no universal answer.
Force-pushed from f0e3411 to 7432c34
We don't need nanosecond resolution for ms_selected_time, and even the millisecond granularity of metaslab_unload_delay_ms is questionable. Reduce it to seconds to avoid gethrtime() calls under a congested lock. Signed-off-by: Alexander Motin <[email protected]> Sponsored by: iXsystems, Inc.
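The idea behind this commit can be illustrated with a small standalone sketch (not the actual OpenZFS code; the struct and function names below are invented for illustration, while ms_selected_time and metaslab_unload_delay_ms refer to the real field and tunable): with second resolution, the unload-delay check needs no gethrtime() call at all.

```c
/*
 * Hedged sketch, not OpenZFS source: shows that a second-resolution
 * timestamp is enough for the metaslab unload-delay check. The names
 * metaslab_sketch_t and metaslab_sketch_may_unload() are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

typedef struct metaslab_sketch {
	uint64_t ms_selected_sec;	/* last selection time, seconds */
} metaslab_sketch_t;

static uint64_t metaslab_unload_delay_ms = 600000;	/* 10 minutes */

/* Return nonzero when the metaslab has been idle long enough to unload. */
static int
metaslab_sketch_may_unload(const metaslab_sketch_t *msp, uint64_t now_sec)
{
	uint64_t delay_sec = metaslab_unload_delay_ms / 1000;
	return (now_sec > msp->ms_selected_sec + delay_sec);
}

int
main(void)
{
	/* Pretend the metaslab was last selected 700 seconds ago. */
	metaslab_sketch_t ms = {
		.ms_selected_sec = (uint64_t)time(NULL) - 700
	};
	printf("may unload: %d\n",
	    metaslab_sketch_may_unload(&ms, (uint64_t)time(NULL)));
	return (0);
}
```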
Labels
Status: Code Review Needed (Ready for review and testing)
Status: Design Review Needed (Architecture or design is under discussion)
Motivation and Context
The existing allocation throttling aimed to improve write speed by allocating more data to vdevs that can write it faster. But in the process it completely broke the original mechanism, designed to balance vdev space usage. With a severe vdev space-usage imbalance, vdevs with higher usage may start accumulating fragmentation sooner than others and, once full, stop accepting writes at all. Also, after a vdev addition it might take a very long time for the pool to restore the balance, since the new vdev gets no real preference unless the old ones are already much slower due to fragmentation. Finally, the old throttling was request-based, which was unpredictable with block sizes varying from 512B to 16MB, and it made little sense with I/O aggregation, where its 32-100 requests could be aggregated into a few, leaving the device underutilized and submitting fewer and/or shorter requests, or in the opposite case trying to queue up to 1.6GB of writes per device.
Description
This change presents a completely new throttling algorithm. Unlike the old request-based one, it measures the allocation queue in bytes, which makes it possible to integrate with the reworked allocation quota (aliquot) mechanism, which is also byte-based. Unlike the original code, which balanced the vdevs' absolute amounts of free space, this one balances their free/used space fractions, which should result in lower and more uniform fragmentation in the long run.
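As a rough illustration (a standalone sketch, not code from this PR; the types, names, and sizes are invented), balancing by free fraction rather than absolute free bytes treats a half-filled 128GB vdev as fuller than a quarter-filled 256GB one, even though the latter has far more free bytes:

```c
/*
 * Hedged sketch: contrasts the old metric (absolute free bytes) with the
 * new one (free/capacity fraction), using a byte-based queue field instead
 * of a request count. All names here are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct vdev_sketch {
	uint64_t vs_capacity;	/* total bytes */
	uint64_t vs_alloc;	/* bytes already allocated */
	uint64_t vs_queued;	/* bytes currently queued for write */
} vdev_sketch_t;

/* Old-style metric: absolute free bytes. */
static uint64_t
free_bytes(const vdev_sketch_t *vd)
{
	return (vd->vs_capacity - vd->vs_alloc);
}

/* New-style metric: free fraction, scaled to parts per million. */
static uint64_t
free_fraction_ppm(const vdev_sketch_t *vd)
{
	return (free_bytes(vd) * 1000000 / vd->vs_capacity);
}

int
main(void)
{
	vdev_sketch_t small = { .vs_capacity = 128ULL << 30, .vs_alloc = 64ULL << 30 };
	vdev_sketch_t big   = { .vs_capacity = 256ULL << 30, .vs_alloc = 64ULL << 30 };

	/* Absolute free space favors the big vdev 3:1 ... */
	printf("free bytes:    %llu vs %llu\n",
	    (unsigned long long)free_bytes(&small),
	    (unsigned long long)free_bytes(&big));
	/* ... while free fraction shows the small vdev is twice as full. */
	printf("free fraction: %llu ppm vs %llu ppm\n",
	    (unsigned long long)free_fraction_ppm(&small),
	    (unsigned long long)free_fraction_ppm(&big));
	return (0);
}
```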
This algorithm still allows write speed to be improved by allocating more data to faster vdevs, but it does so in a more controllable way. On top of the space-based allocation quota, it also calculates the minimum queue depth a vdev is allowed to maintain, and accordingly the amount of extra allocations it can receive if it appears faster. That amount is based on the vdev's capacity and space usage, but it is applied only when the pool is busy. This way the code can choose between faster writes when needed and better vdev balance when not, with the choice gradually shrinking together with the free space.
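A hedged sketch of that idea (standalone, with invented names and constants rather than the patch's actual arithmetic): a baseline byte quota plus a busy-only bonus that shrinks as the vdev's free space shrinks.

```c
/*
 * Hedged sketch, assuming simplified arithmetic: a vdev gets a baseline
 * byte quota, plus an extra share only while the pool is busy, and that
 * extra share scales with the vdev's capacity and remaining free space.
 * None of these names or constants come from the actual patch.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Bytes a vdev may keep queued: baseline plus a busy-time bonus. */
static uint64_t
vdev_queue_limit(uint64_t capacity, uint64_t alloc, bool pool_busy)
{
	uint64_t free = capacity - alloc;
	uint64_t baseline = 16ULL << 20;	/* 16 MiB floor */
	/*
	 * Bonus proportional to remaining free space, i.e. capacity
	 * scaled by the free fraction; granted only when the pool is busy.
	 */
	uint64_t bonus = pool_busy ? (free >> 12) : 0;
	return (baseline + bonus);
}

int
main(void)
{
	uint64_t cap = 256ULL << 30;
	printf("empty, busy:     %llu bytes\n",
	    (unsigned long long)vdev_queue_limit(cap, 0, true));
	printf("half full, busy: %llu bytes\n",
	    (unsigned long long)vdev_queue_limit(cap, cap / 2, true));
	printf("half full, idle: %llu bytes\n",
	    (unsigned long long)vdev_queue_limit(cap, cap / 2, false));
	return (0);
}
```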
This change also makes allocation queues per-class, allowing them to throttle independently and in parallel. Allocations that are bounced between classes due to allocation errors will be able to properly throttle in the new class. Allocations that should not be throttled (ZIL, gang, copies) are not, but they may still follow the rotor and the allocation quota mechanism of the class without disrupting it.
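A minimal sketch of per-class queues (the types, class names, and limits are invented for illustration; the real patch keeps this state inside the spa/metaslab code):

```c
/*
 * Hedged sketch of per-class throttling state. Each class has its own
 * byte-based queue and limit, so classes throttle independently.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum alloc_class { CLASS_NORMAL, CLASS_SPECIAL, CLASS_DEDUP, CLASS_COUNT };

typedef struct class_queue {
	uint64_t cq_queued;	/* bytes queued in this class */
	uint64_t cq_limit;	/* per-class byte limit */
} class_queue_t;

static class_queue_t queues[CLASS_COUNT] = {
	[CLASS_NORMAL]  = { .cq_limit = 256ULL << 20 },
	[CLASS_SPECIAL] = { .cq_limit = 64ULL << 20 },
	[CLASS_DEDUP]   = { .cq_limit = 64ULL << 20 },
};

/*
 * Throttled allocations check only their own class, so a congested
 * special class does not hold back the normal class. Unthrottled writes
 * (ZIL, gang, extra copies in the real code) would skip this check but
 * could still account their bytes so the class rotor stays meaningful.
 */
static bool
class_may_queue(enum alloc_class c, uint64_t size)
{
	return (queues[c].cq_queued + size <= queues[c].cq_limit);
}

int
main(void)
{
	queues[CLASS_SPECIAL].cq_queued = 60ULL << 20;	/* nearly full */
	printf("normal 8M allowed:  %d\n", class_may_queue(CLASS_NORMAL, 8ULL << 20));
	printf("special 8M allowed: %d\n", class_may_queue(CLASS_SPECIAL, 8ULL << 20));
	return (0);
}
```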
How Has This Been Tested?
Test 1: 2 SSDs with 128GB and 256GB capacity written at full speed
Up to ~25% space usage of the smaller device, both SSDs write at about the same maximum speed. After that, the smaller device is gradually throttled to balance space usage. By the time they are full, the devices differ by only a few percent. Since users are typically discouraged from running at full capacity to reduce fragmentation, performance at the beginning matters more than at the end.
Test 2: 2 SSDs with 128GB and 256GB capacity written at slower speed
Since we do not need more speed, the vdevs keep an almost perfect space usage balance.
Test 3: SSD and HDD vdevs of the same capacity, but very different performance, written at full speed
While the pool is empty, the SSD is allowed to write 2.5 times faster than the HDD. As its space usage grows to ~50%, the SSD is throttled down to the HDD speed, and after that even slower. By the time they are full, the devices differ by only a few percent.
Test 4: SSD and HDD vdevs of the same capacity, but very different performance, written at slower speed
Since we do not need more speed, the SSD is throttled to the HDD speed, but in exchange the two keep an almost perfect space usage balance.
Test 5: Second vdev addition
First, a pool with one vdev is filled almost to capacity. Then a second vdev is added and the data is overwritten a couple of times. A single overwrite of the data is enough to re-balance the vdevs, even with some overshoot, probably due to the large TXG sizes relative to the device sizes used in the test and to ZFS delayed frees.
Test 6: Parallel sequential write to 12x 5-wide RAIDZ1 of HDDs