Issues: vllm-project/llm-compressor
#1164 (bug): AttributeError: module 'torch' has no attribute 'OutOfMemoryError' - opened Feb 18, 2025 by alexmadey-oc
#1158 (question): Is it supported to quantize attention to fp8 with calibration? - opened Feb 16, 2025 by YSF-A
#1152 (bug): When quantizing gemma2 in W8A8 format, the input is not positive-definite and gemma2-27B cannot be quantized - opened Feb 14, 2025 by HelloCard
#1104 (question): [Clarification] Regarding KV Cache quantization and FP8 Scales - opened Jan 27, 2025 by nelaturuharsha
#1078 (enhancement): Add support for W8A8 quantization with CPU weight offloading - opened Jan 17, 2025 by NeoChen1024
#1058 (bug): [Bug] SparseGPTModifier with OutputDistillationModifier - opened Jan 11, 2025 by Thunderbeee
#1037 (enhancement): Does llmcompressor support hybrid sparsity? - opened Jan 6, 2025 by jiangjiadi
#1024 (bug): quant method about kv cache - opened Jan 2, 2025 by sitabulaixizawaluduo
#952 (enhancement): About lora finetuning of 2:4 sparse and sparse quant models - opened Dec 4, 2024 by arunpatala
#853 (bug): Perplexity (ppl) Calculation of Local Sparse Model: NaN issue - opened Oct 19, 2024 by HengJayWang
#164 (enhancement): [USAGE] FP8 W8A8 (+KV) with LORA Adapters - opened Sep 11, 2024 by paulliwog
ProTip! Exclude everything labeled bug by searching with -label:bug.