Hello, I'm currently investigating out-of-memory errors when running RAIL with the lephare estimator. During estimation, even though the input file is processed in chunks of 10,000 objects, memory grows at the start of each new chunk (the complete file holds around 130,000 objects). Switching to other estimator algorithms (flexzboost, bpz, etc.) makes the problem disappear, so I'm filing this bug report against rail_lephare. A screenshot of roughly simultaneous lephare estimations on a group of machines shows memory usage growing throughout the execution.
I'm currently trying to investigate the individual allocations during the execution.
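For reference, here is a minimal sketch of how per-chunk allocation growth can be tracked with the standard-library tracemalloc module. The `process_chunk` function, the `_retained` list, and the chunk count are hypothetical stand-ins for the rail_lephare estimation loop (the list just simulates state that survives across chunks); this is a diagnostic sketch, not the actual RAIL code path.

```python
import tracemalloc

# Hypothetical stand-in for one chunk of the lephare estimation;
# the module-level list simulates state that leaks across chunks.
_retained = []

def process_chunk(chunk_id, size=10_000):
    _retained.append([0.0] * size)

n_chunks = 12  # matches the 12 chunks visible in the profiler plot

tracemalloc.start()
baseline = tracemalloc.take_snapshot()

for chunk_id in range(n_chunks):
    process_chunk(chunk_id)
    snapshot = tracemalloc.take_snapshot()
    # Report the allocation sites that grew the most since the previous chunk.
    for stat in snapshot.compare_to(baseline, "lineno")[:5]:
        print(f"chunk {chunk_id}: {stat}")
    baseline = snapshot
```

Comparing consecutive snapshots like this should point at the file and line where the surviving allocations originate, which would help confirm whether the growth comes from rail_lephare itself or from the underlying lephare library.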
I have described the situation in which the bug arose, including what code was executed, information about my environment, and any applicable data others will need to reproduce the problem.
I have included available evidence of the unexpected behavior (including error messages, screenshots, and/or plots) as well as a description of what I expected instead.
If I have a solution in mind, I have provided an explanation and/or pseudocode and/or task list.
A good illustration of the memory increase, captured with a memory profiler. This is a single execution of the estimation algorithm on a single file; the 12 steps in the picture correspond to the 12 chunks of objects being processed.
The problem is that memory grows at each step relative to the previous one, rather than returning to a stable baseline once a chunk is finished.
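If the growth turns out to come from per-chunk objects that are never released, one generic mitigation is to drop references and force a collection between chunks. This is only a sketch of that idea, with `process_chunk` and `write_output` as hypothetical placeholders for the per-chunk estimation and output steps; it is not confirmed to resolve this particular issue.

```python
import gc

def process_chunk(chunk_id):
    # Hypothetical per-chunk estimation step.
    return [0.0] * 10_000

def write_output(result):
    # Hypothetical sink that persists the chunk's results.
    pass

for chunk_id in range(12):
    result = process_chunk(chunk_id)
    write_output(result)
    del result    # drop the last reference to the chunk's data
    gc.collect()  # break any reference cycles before the next chunk
```

Note that `gc.collect()` only helps if the retained memory is reachable solely through Python reference cycles; if lephare holds the data in C/C++ allocations or module-level caches, the leak would persist and the fix would have to go into the library itself.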