
Fix handling of dynamic FP8 grouped gemm on Nvidia #8985

Triggered via: pull request, February 3, 2025 01:19
Status: Success
Total duration: 1h 54m 1s
Artifacts: 2
Jobs:
generate-matrix / generate (7s)
Matrix: build

Artifacts (produced during runtime)

Name                                 Size
pytorch_FBGEMM__3.9_cpu_aarch64      2.7 MB
pytorch_FBGEMM__3.9_cu126_aarch64    446 MB