Create 6 files in the volume
Run a data mover backup and set --parallel-files-upload=6
ParallelFilesUpload has been set correctly in the DUCR's dataMoverConfig
Find the node where the data mover is running
Find the node-agent pid in the node
Run ls -l /proc/<pid>/fd
Fewer than 6 files are opened in parallel. Most of the time, the number of open files is 4, which matches the number of CPU cores
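For convenience, here is a small Go sketch that does roughly the same check as the `ls -l /proc/<pid>/fd` step above; the PID value is a placeholder and has to be replaced with the node-agent PID found on the node.

```go
// Counts the file descriptors a process currently has open by listing
// /proc/<pid>/fd. The PID below is a placeholder, not a real value.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	pid := 12345 // replace with the node-agent PID found on the node
	fds, err := os.ReadDir(filepath.Join("/proc", fmt.Sprint(pid), "fd"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("pid %d has %d open fds\n", pid, len(fds))
}
```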
This is related to the following behavior of the Kopia uploader:
Say the parallel value is set to X: X - 1 files are processed by the worker pool, and the remaining one is processed by the current routine, which is also traversing the current directory and assigning entries to the other workers.
If the current routine encounters a large file, traversal is blocked even though there are free workers.
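A minimal Go sketch of this scheduling pattern (an illustration under stated assumptions, not Kopia's actual code): X - 1 goroutines drain a channel, while the traversing goroutine both feeds the channel and falls back to processing an entry itself when no worker is immediately free, so one large file handled inline stalls traversal even while the other workers sit idle.

```go
// Simulates a pool of parallel-1 workers plus a traversing goroutine that
// uploads an entry itself whenever no worker is immediately free.
package main

import (
	"fmt"
	"sync"
	"time"
)

type entry struct {
	name string
	cost time.Duration // simulated upload time
}

func upload(e entry) {
	time.Sleep(e.cost)
	fmt.Println("uploaded", e.name)
}

func main() {
	const parallel = 4 // X
	entries := []entry{
		{"small-1", 10 * time.Millisecond},
		{"small-2", 10 * time.Millisecond},
		{"small-3", 10 * time.Millisecond},
		{"huge", 2 * time.Second}, // likely processed inline, blocking traversal
		{"small-4", 10 * time.Millisecond},
	}

	work := make(chan entry)
	var wg sync.WaitGroup

	// X - 1 workers drain the channel.
	for i := 0; i < parallel-1; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for e := range work {
				upload(e)
			}
		}()
	}

	// The "traversing" goroutine: it hands entries to idle workers, but when
	// no worker is immediately free it processes the entry itself, and that
	// inline upload blocks further traversal until the (possibly huge) file
	// finishes.
	for _, e := range entries {
		select {
		case work <- e:
		default:
			upload(e)
		}
	}
	close(work)
	wg.Wait()
}
```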
On the other hand, the Kopia uploader has another mechanism to handle large files: file concatenation. If a file is large enough, it is divided into parts and the parts are uploaded concurrently through the same workers.
However, Velero doesn't enable the file concatenation feature. Let's try to enable it in v1.14 so as to gain the best performance.
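For reference, a conceptual Go sketch of the concatenation idea (the part size, function names, and pool layout are assumptions for illustration, not Kopia's actual API): a large file is split into byte ranges and the ranges are uploaded concurrently by the same pool of workers, so a single huge file no longer serializes the upload.

```go
// Splits one file into fixed-size byte ranges and uploads the ranges
// concurrently with a small worker pool. Names and sizes are illustrative.
package main

import (
	"fmt"
	"sync"
)

const partSize = 64 << 20 // 64 MiB parts; the real threshold is Kopia-internal

// uploadPart is a stand-in for uploading one byte range of the file.
func uploadPart(path string, offset, length int64) {
	fmt.Printf("uploading %s [%d, %d)\n", path, offset, offset+length)
}

// uploadLargeFile fans the parts of one file out to `parallel` workers.
func uploadLargeFile(path string, size int64, parallel int) {
	type part struct{ off, len int64 }
	parts := make(chan part)

	var wg sync.WaitGroup
	for i := 0; i < parallel; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for p := range parts {
				uploadPart(path, p.off, p.len)
			}
		}()
	}

	for off := int64(0); off < size; off += partSize {
		l := int64(partSize)
		if off+l > size {
			l = size - off
		}
		parts <- part{off, l}
	}
	close(parts)
	wg.Wait()
}

func main() {
	uploadLargeFile("/data/huge.bin", 300<<20, 4)
}
```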