Make Magma optional for cuda builds? #275
Comments
If you build with magma, you need it at runtime even if you don't use it.
Ok, I have some good news. The use of Magma is entirely limited to a single library, so I think we can split it into a separate subpackage and build Magma and non-Magma variants to choose from. On the minus side, we probably can't avoid building most of libtorch twice, though it could be possible to use ccache to minimize the cost of doing that.
Isn't it used by …?
No, it's dynamically loaded.
Upstream keeps all magma-related routines in a separate libtorch_cuda_linalg library that is loaded dynamically whenever linalg functions are used. Given that the library is relatively small, splitting it out makes it possible to provide "magma" and "nomagma" variants that can be alternated between. Fixes conda-forge#275. Co-authored-by: Isuru Fernando <[email protected]>
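In recipe terms, the split could look roughly like the following. This is only a sketch for illustration; the output names and dependency wiring are assumptions, not the final design:

```yaml
# hypothetical meta.yaml outputs for the magma/nomagma split
outputs:
  - name: libtorch-cuda-linalg-magma
    requirements:
      run:
        - libmagma
  - name: libtorch-cuda-linalg-nomagma
  - name: pytorch
    requirements:
      run:
        # satisfied by either variant above; the user picks one
        - libtorch-cuda-linalg
```

Only the small linalg library is duplicated across variants; the bulk of libtorch ships once.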
A conda-forge environment containing nothing but pytorch for CUDA currently weighs in at 7.2 GB, which is rather on the heavy side.
Looking for potential to slim things down, libmagma at ~2 GB looks like a good candidate.
The pytorch docs seem to suggest that libmagma is used as an alternative to cusolver, which is included anyway at a much more modest 150 MB.
In the past, magma was significantly faster than cusolver, as demonstrated by [1]. However, a recent 2024 paper by the magma authors [2] shows that cusolver has made progress and is now faster for some of the most important problems.
Magma still offers significant performance benefits for certain workloads, but given that pytorch can switch between the available libraries at runtime, we could make magma an optional dependency, i.e. merely include it in run_constrained and leave it up to the user to choose space or performance optimization based on their use case.
Is this feasible, or am I missing something about the use of magma in pytorch?
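Concretely, in the recipe that might look something like the fragment below (the version pin is purely illustrative):

```yaml
requirements:
  run:
    # no hard dependency on magma any more
  run_constrained:
    # only constrains libmagma if the user chooses to install it
    - libmagma >=2.6
```

With run_constrained, a plain install stays slim, while users who add libmagma themselves get a compatible version enforced by the solver.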
Do you think this is desirable?
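For users who do keep magma installed, pytorch exposes the runtime switch as torch.backends.cuda.preferred_linalg_library, which accepts "default", "cusolver" or "magma". A guarded sketch (the helper name is hypothetical, and it degrades gracefully when no CUDA-enabled torch is present):

```python
import importlib.util

VALID = {"default", "cusolver", "magma"}


def select_linalg_backend(name: str) -> bool:
    """Ask pytorch to prefer one CUDA linalg backend.

    Returns True if the preference was applied, False if no CUDA-enabled
    torch is available in this environment.
    """
    if name not in VALID:
        raise ValueError(f"unknown backend {name!r}")
    if importlib.util.find_spec("torch") is None:
        return False  # torch not installed at all
    import torch

    if not torch.cuda.is_available():
        return False  # CPU-only environment; nothing to switch
    torch.backends.cuda.preferred_linalg_library(name)
    return True
```

This is what would let a space-conscious user stay on cusolver while a performance-conscious one opts back into magma without a rebuild.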
[1] S. Abdelfattah, A. Haidar, S. Tomov and J. Dongarra, "Analysis and Design Techniques towards High-Performance and Energy-Efficient Dense Linear Solvers on GPUs," IEEE Transactions on Parallel and Distributed Systems, vol. 29, no. 12, pp. 2700-2712, Dec. 2018, doi: 10.1109/TPDS.2018.2842785.
[2] Abdelfattah A, Beams N, Carson R, et al. MAGMA: Enabling exascale performance with accelerated BLAS and LAPACK for diverse GPU architectures. The International Journal of High Performance Computing Applications. 2024;38(5):468-490. doi:10.1177/10943420241261960