[Feature]: FP6 #4515
Comments
@nivibilla thanks for sharing the separated kernel implementation - this makes it a lot more straightforward to understand, and I would be interested in implementing this within vLLM. I just want to note that they did compare against fine-grained and coarse-grained W4A16 INT4 kernels (of which we already have very good implementations) and, as expected, saw performance slightly below them. So while I don't think this will offer a particularly new capability in vLLM, it would be very nice to get relatively accurate 6-bit model compression with just a runtime flag.
Thanks @mgoin, yes the performance isn't as good as INT4. However, model quality is nearly indistinguishable from FP16, which is really nice. I hope that FP6 becomes the new standard in place of FP8; there's no need to run the weights at any higher precision. I think it's a nice tradeoff relative to INT4: better model quality at slightly lower speed. And yes, weight loading may be an issue. I want to load two replicas of a model on the same GPU, so if one of them takes up the entire GPU memory while loading, it will fail. Hope this can be fixed too.
cc @comaniac
Thanks for the request. We can definitely integrate FP6 quantization into vLLM as another supported quantization method to run FP6 models. It shouldn't be too hard given that it only quantizes linear layers, and the FP6 linear kernels are open source. Meanwhile, I'd still keep FP8 as the standard (actually the term "standard" is not really important, because FP8 is also implemented as just another quantization method and the choice is up to users). The reason is that FP8 is officially supported by GPU vendors (both NVIDIA and AMD) at the instruction level, meaning that 1) the vendors will maintain compatibility and performance in future GPU releases, and 2) more of the workload (e.g., FP8 flash attention and the KV cache) can be covered.
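For context on that last point: in vLLM, FP8 already extends beyond the weights, e.g. the KV cache can also be stored in FP8. A minimal sketch, assuming the `quantization` and `kv_cache_dtype` options of recent vLLM versions (the exact accepted values may differ by release):

```python
# Sketch only: option names/values are assumptions based on recent vLLM
# releases; consult your installed version's docs for the exact choices.
from vllm import LLM

llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    quantization="fp8",    # quantize linear-layer weights to FP8 at load time
    kv_cache_dtype="fp8",  # store the KV cache in FP8 as well
)
```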
It seems support for this will land in #4652.
Not really. The PR you pointed out only uses FP6/8 checkpoints. The compute is still in FP16.
@comaniac FP6_LLM is weight-only quantization, i.e. W6A16; you can see this in the graph I shared in my comment above. There are no compute savings with this method compared to FP16. Also, the PR I pointed to allows quantizing at runtime, like our FP8 quantization, not just loading pre-quantized checkpoints.
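For reference, here is a minimal sketch of what that runtime path looks like from the user side. The method name `deepspeedfp` is an assumption based on the linked PR; weights are compressed at load time while activations and matmuls stay in FP16:

```python
# Sketch only: the quantization method name is assumed from the linked PR;
# weights are quantized at load time, compute remains FP16 (weight-only).
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    quantization="deepspeedfp",  # assumed identifier for the FP6/FP8 weight format
)

outputs = llm.generate(["What is FP6 quantization?"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```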
Thanks for the clarification. Then we can close this issue, I suppose?
@mgoin I'm a bit confused: why does FP6 not save VRAM? Even if the activations are in FP16, surely the weights being in FP6 save memory, right?
Installed from source and it works as expected, amazing!
Loaded model weight memory is reduced as well.
Hi @nivibilla, you just misunderstood what I said. I said there are no compute savings, meaning the computation still all happens at FP16 precision. This does not imply there are no memory savings; those are very much happening.
@mgoin ohhh I see. Lol mb
@twaka I think an increase in latency is expected unless fp6_llm's kernel is integrated, since dequantize and matmul are not currently fused in the deepspeedfp implementation.
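To illustrate the point, a rough sketch of the unfused path (illustrative names, not the actual deepspeedfp code): the packed weight is first dequantized to a full FP16 matrix, and only then does a standard GEMM run, whereas a fused kernel like fp6_llm's dequantizes small tiles inside the GEMM loop and never writes the FP16 copy of the whole weight to global memory.

```python
# Illustrative sketch of the unfused dequantize-then-matmul path; "dequantize"
# stands in for whatever routine unpacks the low-bit weights back to FP16.
import torch

def unfused_quantized_linear(x: torch.Tensor, packed_weight, dequantize) -> torch.Tensor:
    # Step 1: materialize a full FP16 copy of the weight on every forward pass
    # (extra kernel launch plus extra global-memory traffic).
    w_fp16 = dequantize(packed_weight)             # [out_features, in_features]
    # Step 2: run a standard FP16 GEMM on the dequantized weight.
    return torch.nn.functional.linear(x, w_fp16)
```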
Support being added in #8751
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
🚀 The feature, motivation and pitch
FP6 allows models such as Llama 70B to fit in a single A100 GPU (see the rough memory arithmetic below). 6-bit is also often the sweet spot between quality and speed. This was a paper from DeepSpeed and is integrated into DeepSpeed-MII.
They also publish the code and kernels separately:
https://github.com/usyd-fsalab/fp6_llm
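As a rough sanity check on the single-GPU claim (weights only; KV cache, activations, and quantization-scale overhead are ignored):

```python
# Back-of-the-envelope weight memory for a 70B-parameter model.
params = 70e9

fp16_gb = params * 16 / 8 / 1e9   # ~140 GB  -> needs multiple GPUs
fp6_gb  = params * 6  / 8 / 1e9   # ~52.5 GB -> fits on one 80 GB A100
int4_gb = params * 4  / 8 / 1e9   # ~35 GB   -> smaller still, but more quality loss

print(f"FP16 {fp16_gb:.0f} GB | FP6 {fp6_gb:.1f} GB | INT4 {int4_gb:.0f} GB")
```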
Alternatives
No response
Additional context
No response