Optimize loading afferent_edges
#298
Conversation
Force-pushed from 2cad097 to be4cb91, then from be4cb91 to fd8adc1.
When reading edge IDs from a large file we get a 10'000x speedup. The benchmark is:

This computes the edge IDs used by each MPI rank for analysis purposes. With the optimization it takes about 2-4s, whereas without it the first 10 ranks alone take 77s (or 8.5h total).
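The benchmark script itself isn't reproduced here; below is a minimal sketch of what such a per-rank measurement could look like using libsonata's Python bindings and mpi4py. The file path, population name and total node count are placeholders, not values from the PR.

```python
# Hypothetical sketch (not the PR's benchmark): each MPI rank collects the
# IDs of the edges afferent to its slice of the target nodes and reports
# how long that took.
import time

import libsonata
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Placeholders: file path, population name and total node count.
pop = libsonata.EdgeStorage("edges.h5").open_population("default")
n_target_nodes = 100_000

# Naive contiguous partition of the target nodes across ranks.
per_rank = (n_target_nodes + comm.size - 1) // comm.size
lo = comm.rank * per_rank
hi = min(lo + per_rank, n_target_nodes)

start = time.perf_counter()
selection = pop.afferent_edges(list(range(lo, hi)))
edge_ids = selection.flatten()
elapsed = time.perf_counter() - start

print(f"rank {comm.rank}: {len(edge_ids)} afferent edge IDs in {elapsed:.2f}s")
```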
The PR uses templates to hide the difference between …

The solution only works for …
Reading from parallel filesystems, e.g. GPFS, requires reading few but large chunks; reading multiple times from the same block/page comes with a hefty performance penalty. The commit implements the functionality for merging nearby reads by adding or modifying:

* `sortAndMerge` to allow merging ranges across gaps of a certain size.
* `bulkRead` to read block-by-block and extract the requested slices in memory.
* `_readSelection` to always combine reads.
* `afferent_edges` to optimize reading of edge IDs.

It requires a compile-time constant `SONATA_PAGESIZE` to specify the block/page size to be targeted.
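The merging itself lives in libsonata's C++ internals; as an illustration of the idea behind `sortAndMerge` (not the actual implementation), here is a small Python sketch that coalesces sorted ranges whenever the gap between them does not exceed a threshold such as the block/page size.

```python
# Illustrative sketch of the merging idea: coalesce sorted [start, stop)
# ranges whose gaps are at most `max_gap`, so one large contiguous read
# replaces many small ones.
def sort_and_merge(ranges, max_gap=0):
    if not ranges:
        return []
    merged = []
    for start, stop in sorted(ranges):
        if merged and start - merged[-1][1] <= max_gap:
            # Close enough to the previous range: extend it across the gap.
            merged[-1][1] = max(merged[-1][1], stop)
        else:
            merged.append([start, stop])
    return [tuple(r) for r in merged]


# Three scattered reads collapse into a single block-sized read when the
# gaps are smaller than the (hypothetical) page size.
print(sort_and_merge([(0, 10), (4096, 4100), (12, 20)], max_gap=4096))
# -> [(0, 4100)]
```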
Force-pushed from fd8adc1 to d2e189e.
## Context

When using `WholeCell` load-balancing, the access pattern when reading parameters during synapse creation is extremely poor and is the main reason why we see long (10+ minutes) periods of severe performance degradation of our parallel filesystem when running slightly larger simulations on BB5.

Using Darshan and several PoCs we established that the time required to read these parameters can be reduced by more than 8x and the IOps can be reduced by over 1000x when using collective MPI-IO. Moreover, the "waiters" were reduced substantially as well. See BBPBGLIB-1070.

Following those findings we concluded that neurodamus would need to use collective MPI-IO in the future. We've implemented most of the required changes directly in libsonata, allowing others to benefit from the same optimizations should the need arise. See BlueBrain/libsonata#309, BlueBrain/libsonata#307 and the preparatory work: BlueBrain/libsonata#315, BlueBrain/libsonata#314, BlueBrain/libsonata#298.

By instrumenting two simulations (SSCX and reduced MMB) we concluded that neurodamus was almost collective. However, certain attributes were read in a different order on different MPI ranks, possibly due to hashes being salted differently on different MPI ranks.

## Scope

This PR enables neurodamus to use collective IO for the simulation described above.

## Testing

We successfully ran the reduced MMB simulation, but since SSCX hasn't been converted to SONATA, we can't run that simulation.

## Review

* [x] PR description is complete
* [x] Coding style (imports, function length, new functions, classes or files) is good
* [ ] Unit/scientific test added
* [ ] Updated README, in-code, developer documentation

---

Co-authored-by: Luc Grosheintz <[email protected]>
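For illustration only (this is not the neurodamus or libsonata code path), a collective HDF5 read in Python with h5py built against parallel HDF5 looks roughly as follows; the key constraint mentioned above is that every rank must open the same datasets and issue the collective reads in the same order. The dataset path and the per-rank slicing are placeholders.

```python
# Illustrative sketch of a collective HDF5 read with h5py + mpi4py;
# the dataset path and the per-rank selection are placeholders.
import h5py
from mpi4py import MPI

comm = MPI.COMM_WORLD

with h5py.File("edges.h5", "r", driver="mpio", comm=comm) as f:
    dset = f["edges/default/0/delay"]  # placeholder dataset path

    # Every rank participates in the same collective call, each reading
    # its own contiguous slice of the dataset.
    per_rank = (dset.shape[0] + comm.size - 1) // comm.size
    lo = comm.rank * per_rank
    hi = min(lo + per_rank, dset.shape[0])

    with dset.collective:
        values = dset[lo:hi]

print(f"rank {comm.rank}: read {values.shape[0]} values collectively")
```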
Optimized reading of edge IDs by aggregating ranges into larger (GPFS-friendly) ranges before creating the appropriate HDF5 selection, to reduce the number of individual reads. Any unneeded data is then filtered out in memory. This is very similar to the work done in #183.
This PR introduces the following:

* Bulk reading of the edge IDs returned by `afferent_edges` as a `libsonata.Selection`.
* A compile-time constant `SONATA_PAGESIZE`, which controls how large the merged regions need to be.
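As a hypothetical usage-level sketch (file path, population and attribute names are placeholders): the result of `afferent_edges` is a single `libsonata.Selection`, and handing it to the readers in one call is what lets the library merge nearby ranges into fewer, larger HDF5 reads internally.

```python
# Hypothetical usage sketch: the Selection built from afferent edges is
# passed to the attribute readers as one object, so the library can merge
# nearby ranges internally before touching the file.
import libsonata

pop = libsonata.EdgeStorage("edges.h5").open_population("default")

selection = pop.afferent_edges([0, 1, 2, 3])
print(selection.ranges)          # the (start, stop) ranges backing the selection
print(len(selection.flatten()))  # total number of selected edge IDs

# Attribute reads take the whole selection at once instead of per-range calls.
delays = pop.get_attribute("delay", selection)
```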