Make magma optional #298
base: main
Conversation
Upstream keeps all magma-related routines in a separate libtorch_cuda_linalg library that is loaded dynamically whenever linalg functions are used. Given the library is relatively small, splitting it makes it possible to provide "magma" and "nomagma" variants that can be alternated between. Fixes conda-forge#275 Co-authored-by: Isuru Fernando <[email protected]>
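To make the split concrete, here is a minimal way to observe the dynamic loading described above from Python. This is a hedged sketch, not part of the recipe: it assumes Linux and a CUDA-enabled PyTorch build, and it inspects `/proc/self/maps`, which is not a PyTorch API.

```python
# Sketch: libtorch_cuda_linalg.so is dlopen()ed lazily, so it should only
# show up among the loaded shared objects after the first linalg call.
# Assumes Linux and a CUDA-enabled PyTorch build with a working GPU.
import torch

def linalg_lib_loaded() -> bool:
    with open("/proc/self/maps") as maps:
        return "libtorch_cuda_linalg" in maps.read()

print("before:", linalg_lib_loaded())   # expected: False
a = torch.randn(8, 8, device="cuda")
torch.linalg.inv(a)                     # first linalg op triggers the load
print("after:", linalg_lib_loaded())    # expected: True
```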
…nda-forge-pinning 2024.12.04.13.54.14
Try to speed up magma/nomagma builds a bit. Rather than rebuilding the package 3 times (possibly switching magma → nomagma → magma again), build it twice at the very beginning and store the built files for later reuse in the subpackage builds. While at it, replace the `pip wheel` calls with `setup.py build` to avoid unnecessarily zipping up and then unpacking the whole thing. In the end, we only grab a handful of files for the `libtorch*` packages, and they are in a predictable location in the build directory. `pip install` is still used for the final `pytorch` builds.
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR. I do have some suggestions for making it better though... For recipe/meta.yaml:
This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/12202823840. Examine the logs at this URL for more detail.
Since this is still WIP, I started only a single job on Linux for now (x64 + MKL + CUDA).
Thanks. It seems to have failed only because it's adding new outputs. Do you want me to file the admin request for allowing the new outputs?
That would be great!
Filed as conda-forge/admin-requests#1209.
The test currently refused to even start, since not all dependencies were satisfied.
Put all the rules in a single file. In the end, build_common.sh has pytorch-conditional code at the very end anyway, and keeping the code split like this only makes it harder to notice mistakes.
that are independent of selected Python version and are therefore shared
by all Python versions.

2. `libtorch-cuda-linalg` that provides the shared `libtorch_cuda_linalg.so`
Is this the preferred one? As in, is it preferred that users make use of linalg? I ask because we try very hard to ensure that `mamba install pytorch` installs the best hardware-optimized one.
Not sure I understand the question. `libtorch_cuda_linalg.so` is required for some operations. It is always pulled in by `libtorch` itself, i.e. the end result is that some version of the library is always installed.
As for magma vs. nomagma, I think we ought to prefer magma. #275 (comment) suggests that cusolver will be faster for some workflows, but the magma build supports both backends and, according to https://pytorch.org/docs/stable/backends.html#torch.backends.cuda.preferred_linalg_library, it has a heuristic to choose the faster backend for a given operation.
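For reference, that heuristic can be queried and overridden at runtime through the documented API linked above. A small sketch, assuming a CUDA-enabled PyTorch; forcing `"magma"` only works when the installed build actually links magma:

```python
import torch

# "default" lets PyTorch pick between cusolver and magma per operation,
# when both backends are available in the installed build.
print(torch.backends.cuda.preferred_linalg_library())

# Force a specific backend for subsequent linalg calls.
torch.backends.cuda.preferred_linalg_library("cusolver")

a = torch.randn(64, 64, device="cuda")
b = torch.randn(64, 64, device="cuda")
x = torch.linalg.solve(a, b)  # now routed through cusolver
```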
So who benefits from making this optional?
Sorry if this is obvious.
This is #298, i.e. people who want to avoid installing the large magma dependency (~250M as a package). Given that `libtorch-cuda-linalg` itself is only 200k (nomagma) / 245k (magma), I figured it's worth it.
The no-magma variant reduces the on-disk install size of a `pytorch-gpu` install from 7 GB to 5 GB (as #275 says), so this is definitely worth doing.
Given that `cusolver` is now on average fairly close in performance to `magma`, and that the PyTorch wheels don't include magma, I'd argue that the default should be `nomagma` (I'd certainly almost always want that for my use cases). However, what the default is matters less than both options being available.
I understand. It is just not "obvious", that's all.
> few users will touch

Is this based on your experience or facts?
The main driving principle with the pytorch package here has been to provide the same features as in upstream conda/wheels so that users who do use our packages are not driven away by mysterious performance issues. With the deprecation of the anaconda.org/pytorch channel, we certainly do not want the users of that channel to look at conda-forge and form the opinion that the pytorch packages in conda-forge are sub-par. Therefore I do strongly prefer using the same options as the upstream wheels, which in this case means with magma.
> Is this based on your experience or facts?

It's based on @rgommers' statement "10 or so functions, vs. [...] all of libtorch". I don't know pytorch nearly as well as Ralf, so I caveated my point with "assuming their percentage [of those who need it] among users is reasonably small".
IMO it's always a numbers game (and often we don't know the numbers, so we stay conservative; which is fine!) - if we knew, say, that <0.1% of users are affected, then burdening the other 99.9% of users with a useless 2 GB download sounds unappealing. The question gets progressively harder the higher we estimate that percentage. Probably anything north of 5% of users, we'd default to performance rather than footprint? I don't have a magic bullet. 🤷
> provide the same features as in upstream conda/wheels so that users who do use our packages are not driven away by mysterious performance issues.

I agree with the overall direction, of course. We still have a few things to catch up on (e.g. cpuarch builds, as IIUC pytorch effectively requires something between x86_64 v3 and v4).
That said, it's certainly technically possible to warn users of a potential performance cliff where we don't yet support certain upstream optimizations (or require an opt-in for whatever reason). It's mainly a question of how much of that we want to patch and/or upstream.
To be clear, I'm also OK with "fat binaries are preferable to performance degradation or extra maintenance effort". The magma case is just not so black-and-white IMO. What do you think about the idea of making an exception to link libmagma statically?
To be fair, they're ~10 pretty important linear algebra functions. The default matching upstream is also a good argument. I actually wish that were done more often, so I can't complain about the argument being applied here. :) For example, the cuDNN version is very performance-sensitive, so I wish that were matched better rather than just taking whatever the latest version is.
Static linking or some other way to reduce `libmagma` in size would be good to investigate then, though, because binary size is also a performance (and usability) issue.
They are pretty important linear algebra functions like `lu_solve`, so I would say the vast majority of users would use magma, not the other way around. For example, a simple script like the one below
import torch
a = torch.randn(5, 5).cuda()
b = torch.randn(5, 5).cuda()
torch.linalg.solve(a, b)
would invoke magma. The number of functions provided is a poor indicator of their usage (though the point about code size is certainly valid).
> e.g. cpuarch builds, as IIUC pytorch effectively requires something between x86_64 v3 and v4
pytorch requires those from the compiler, but not at runtime. fbgemm does runtime cpu dispatching just like openblas, numpy, etc.
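As a quick illustration of that runtime dispatching (a sketch, assuming a recent PyTorch version where this helper is available):

```python
import torch

# Reports the highest CPU instruction-set level that PyTorch's dispatcher
# selected at runtime on this machine (e.g. "AVX2" or "AVX512"), which is
# independent of the ISA baseline the binary was compiled for.
print(torch.backends.cpu.get_cpu_capability())
```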
> What do you think about the idea of making an exception to link libmagma statically?
Why do you think that linking libmagma statically will help? Just because upstream used a static build? One option that's different from upstream is conda-forge/libmagma-feedstock#22
Hi! This is the friendly automated conda-forge-linting service. I was trying to look for recipes to lint for you, but it appears we have a merge conflict. Please try to merge or rebase with the base branch to resolve this conflict. Please ping the 'conda-forge/core' team (using the `@conda-forge/core` mention).
I've added an explanation in the README, as well as a fix to install
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR. I do have some suggestions for making it better though... For recipe/meta.yaml:
This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/12318937456. Examine the logs at this URL for more detail.
While technically upstream uses 2024.2.0, this causes some of the calls to fail with an error: `RuntimeError: MKL FFT error: Intel oneMKL DFTI ERROR: Inconsistent configuration parameters`. Force <2024, which seems to work better. Fixes conda-forge#301
Enable actually running a fixed random subset (1/5) of core tests to check for packaging-related regressions. We are not running the complete test suite because it takes too long to complete.
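For illustration, one way to pick such a fixed (seeded) subset of test modules is sketched below; this is a hypothetical example, not the recipe's actual selection code, and the module names are placeholders.

```python
# Hypothetical sketch: select a deterministic 1/5 shard of test modules by
# seeding the RNG, so every CI run exercises the same subset.
import random

ALL_TESTS = ["test_torch", "test_nn", "test_autograd", "test_ops", "test_linalg"]

rng = random.Random(0)  # fixed seed => stable selection across runs
subset = sorted(rng.sample(ALL_TESTS, k=max(1, len(ALL_TESTS) // 5)))
print(subset)
```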
I've added an mkl pin to fix #301, and added running a subset of tests. If they turn out to take too much time, we can go for a smaller shard.
Per `RuntimeError: Ninja is required to load C++ extensions`
Ok, FWICS it adds roughly 1.5 h to a single CI build for osx, so definitely too much (and the megabuild will multiply that by 4, I think). I'm going to replace the subset once CI confirms that I've fixed the test failure (so I don't lose it while changing the test subset).
Replaced the tests with a hand-picked subset. Pulled in a few fixes for numpy-2 testing. Next on my list is to try making it work with rattler-build.
Do you have the built `libtorch_cuda_linalg` libraries handy?
Not offhand, but I should have a warm ccache, so I'll get one quickly. However, I've only built for the 7.5 CUDA target, is that okay for you?
yep
Here are both "magma" and "nomagma" variants (they're small): linalgs.zip
While there still doesn't seem to be clear agreement on which build should be preferred, let's prefer "magma" to keep the current behavior unchanged for end users.
Replace the build number hacks with `track_features` to deprioritize generic BLAS relative to mkl, and CPU relative to CUDA. This is mostly intended to simplify things before trying to port to rattler-build.
…onda-forge-pinning 2024.12.12.16.25.16
Remove a leftover `skip` that prevented the CUDA + generic BLAS build from providing all packages, notably `pytorch`. While at it, remove a redundant `[win]` skip.
Ok, some important stuff in today's update:
That said, I think the latter means that I should set the build number higher than the highest build number currently used, is that correct?
It seems like with just
libtorch-split)
    # Call setup.py directly to avoid spending time on unnecessarily
    # packing and unpacking the wheel.
    $PREFIX/bin/python setup.py build
Calling `setup.py` directly is deprecated, right?
Depends on who you ask. setuptools itself has only deprecated `setup.py install` so far. Then, the hacks PyTorch does on top of setuptools are all pretty much incompatible with PEP 517 (especially the `sys.argv` manipulations).
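To illustrate why such hacks clash with PEP 517 builds, here is a simplified, hypothetical example; it is not PyTorch's actual `setup.py`, and the `--use-magma` flag is made up for illustration:

```python
# Hypothetical setup.py that rewrites its own command line before calling
# setuptools. Under a PEP 517 frontend (e.g. `pip wheel .`), the build backend
# is invoked through a hook with a synthetic argv, so flags typed by the user
# never reach this code and the logic silently misbehaves.
import sys
from setuptools import setup

USE_MAGMA = "--use-magma" in sys.argv
sys.argv = [arg for arg in sys.argv if arg != "--use-magma"]

setup(
    name="example-pkg",
    version="0.0.1",
    py_modules=[],
)
```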
This reverts commit 4ad7437.
…onda-forge-pinning 2024.12.13.14.31.10
I've reverted the build number and
Hmm, I suppose there's no trivial way around that. We could then multiply all
Checklist

- Reset the build number to 0 (if the version changed)
- Re-rendered with the latest conda-smithy (Use the phrase `@conda-forge-admin, please rerender` in a comment in this PR for automated rerendering)

From the commit message:
Also:
Fixes #275
@isuruf helped me a lot with this, particularly with refactoring the builds, so both variants are built in one run.