New GEMM kernels for weight-only quantization #2090
Conversation
The main branch has
Verified LMDeploy fp16 and AWQ on H100 with @lzhangzz; both are blazing fast.
TODO:
Overall LGTM; just the minor H100 issues mentioned above need to be resolved.
f16*u4g128 vs. cublasGemmEx f16*f16, both using HMMA with f32 accumulation, benchmarked on 32 weight matrices from 8 models ranging from 7B to 72B.
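For readers unfamiliar with the f16*u4g128 notation: the weights are unsigned 4-bit integers with one f16 scale and zero-point per group of 128 elements along the reduction (k) dimension, and the product is accumulated in f32, matching the cublasGemmEx baseline. Below is a minimal NumPy reference sketch of that numeric scheme (function names and layout are hypothetical illustrations, not LMDeploy's actual CUDA kernel or its memory layout):

```python
import numpy as np

def dequant_u4_g128(q, scales, zeros, group_size=128):
    """Dequantize group-wise u4 weights to f32.

    q:      (k, n) uint8 holding 4-bit values in [0, 16)
    scales: (k // group_size, n) per-group scales
    zeros:  (k // group_size, n) per-group zero-points
    """
    k, _ = q.shape
    # Map each row of q to its group's scale/zero along the k axis.
    g = np.repeat(np.arange(k // group_size), group_size)
    return (q.astype(np.float32) - zeros[g]) * scales[g]

def gemm_f16_u4g128(x, q, scales, zeros, group_size=128):
    """y = x @ dequant(q): f16 activations, u4g128 weights, f32 accumulation."""
    w = dequant_u4_g128(q, scales, zeros, group_size)
    # Accumulate in f32 (as with HMMA + f32 accumulator), then cast back to f16.
    return (x.astype(np.float32) @ w).astype(np.float16)

# Tiny example with hypothetical shapes.
m, k, n, gs = 4, 256, 8, 128
rng = np.random.default_rng(0)
q = rng.integers(0, 16, size=(k, n)).astype(np.uint8)
scales = (rng.random((k // gs, n)).astype(np.float32) + 0.1)
zeros = rng.integers(0, 16, size=(k // gs, n)).astype(np.float32)
x = rng.standard_normal((m, k)).astype(np.float16)
y = gemm_f16_u4g128(x, q, scales, zeros, gs)
```

The real kernels fuse the dequantization into the GEMM main loop on tensor cores rather than materializing `w`; this sketch only pins down the arithmetic being compared against the f16*f16 baseline.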