AMD Support #104

Does this work on AMD cards? What are the GPU requirements for inference?
Please refer to https://github.com/lm-sys/FastChat#vicuna-weights and https://github.com/lm-sys/FastChat#serving.
You can attempt to use ROCm on Linux, but it's far from a smooth experience.
I tried it out on my RX 7900 XTX and it loaded the whole Vicuna 13B model in 8-bit mode into VRAM, but segfaulted after loading the checkpoint shards. (I'm guessing that's because the card isn't officially supported by ROCm yet 😅: ROCm/ROCm#1973.) I also set up the ROCm + Vicuna development environment using a Nix flake, but there are a few more tweaks I want to make before publishing it (e.g. writing Nix packages for accelerate and gradio).
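If you hit a similar crash, one quick diagnostic (my suggestion, not from the thread) is to check which gfx targets your ROCm install actually detects for the card, since a target that your ROCm release doesn't support is a likely culprit for crashes right after model load:

```
# rocminfo ships with ROCm; its output includes a "Name: gfxNNNN" line per device.
# gfx1100 (RX 7900 XTX) was not supported by release builds before ROCm 5.5.
rocminfo | grep -i gfx
```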
🎉 I managed to get this running on my RX 7900 XTX!! I just tracked down all the development commits that added gfx11 support to ROCm and built it all from source. I pushed my Nix flake here if anyone wants to try it out themselves: https://github.com/kira-bruneau/FastChat/commit/75235dac0365e11157dbd950bc1a4cf528f8ddc6. (I have it hard-coded to target gfx1100.) Steps:
```
nix develop github:kira-bruneau/FastChat/gfx1100
```
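From inside that dev shell, launching the model should look roughly like the standard FastChat invocation; this is a sketch, and the model path below is a placeholder, so double-check the flags against the FastChat README for your version:

```
# --model-path is wherever your Vicuna weights live (hypothetical path here);
# --load-8bit matches the 8-bit mode mentioned in the comments above.
python3 -m fastchat.serve.cli --model-path /path/to/vicuna-13b --load-8bit
```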
I can confirm FastChat with the Vicuna 13B model runs fine in 8-bit mode on a single AMD 6800 card. System: Ubuntu 20 LTS; installed ROCm 5.4.2, then PyTorch with ROCm 5.4.2 support. No need to build from source; it works directly with all official packages.
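For anyone reproducing this setup, the PyTorch half likely boils down to something like the following; the wheel index URL follows the pattern pytorch.org published for ROCm 5.4.2 builds, but treat it as a sketch and confirm the exact command on https://pytorch.org:

```
# Install ROCm 5.4.2 itself via AMD's official packages first, then pull
# the matching PyTorch wheels from the ROCm-specific index:
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2
```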
Oh yep, sorry: for all other supported AMD cards you shouldn't need to build from source. I only had to because the RX 7900 XTX isn't supported in the release builds of ROCm yet.
@kira-bruneau Is it still necessary to build from source for the RX 570 (gfx803)?
@aseok Oh nope! That was only necessary before the ROCm 5.5 release, to support gfx1100. Although... there are still some problems in nixpkgs that mean parts still have to be compiled from source if you want to use the flake (see NixOS/nixpkgs#230881): right now the builder fails to cache rocfft, so you'd still have to compile pytorch from source 😞. If you want to avoid building from source completely, I'd recommend using the official PyTorch releases (https://pytorch.org), or finding a Docker image set up for it (which would be a bit more involved). Hopefully the fixes will get upstreamed soon though!
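Whichever route you take, a quick sanity check (my addition, not from the thread) confirms you ended up with a ROCm build of PyTorch: torch.version.hip is None on CUDA builds and a version string on ROCm builds.

```
# Prints the HIP version (non-None on ROCm builds) and whether a GPU is visible.
# Note: ROCm builds of PyTorch deliberately reuse the torch.cuda API surface.
python3 -c "import torch; print(torch.version.hip); print(torch.cuda.is_available())"
```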
@kira-bruneau Can someone create instructions to install and run with ROCm, please? There doesn't seem to be a flag for running in ROCm mode.
Perhaps there should be a note in the README about AMD compatibility? I successfully reproduced @Gaolaboratory's results: I managed to run Vicuna 13B on my RX 6800 XT.
@onyasumi If PyTorch isn't installed, running ...
@JonLiuFYI could you contribute a pull request to add some notes about AMD?
@JonLiuFYI Thank you, this seemed to work for me. I can add a PR later to document this in the README.
@JonLiuFYI @onyasumi Please go ahead. Thanks!