Issues: unslothai/unsloth
[FIXED] attention_mask = attention_mask.to(torch.bool)
#1704 opened Feb 14, 2025 by torahoang
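For context, PyTorch's masked-attention kernels expect the attention mask to be boolean (or a float additive mask) rather than an integer 0/1 tensor; a minimal sketch of the kind of cast the fix title refers to, with the tensor below purely illustrative rather than Unsloth's actual code:

```python
import torch

# Illustrative 0/1 padding mask, as produced by most tokenizers.
attention_mask = torch.tensor([[1, 1, 1, 0]])

# Cast to bool so attention kernels interpret it as keep/ignore flags.
attention_mask = attention_mask.to(torch.bool)

print(attention_mask)  # tensor([[ True,  True,  True, False]])
```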
Issues list
Unsloth GRPO trainer error - IndexError: argmax(): Expected reduction dim 1 to have non-zero size.
#1817 opened Feb 24, 2025 by w601sxs
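The IndexError itself is plain PyTorch behaviour when argmax reduces over a dimension of size zero (for example, an empty batch of completions); a standalone reproduction of the error class, not Unsloth's actual code path:

```python
import torch

x = torch.empty(2, 0)  # dim 1 has zero size

try:
    x.argmax(dim=1)
except IndexError as e:
    # "argmax(): Expected reduction dim 1 to have non-zero size."
    print(e)
```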
Loading Fine-Tuned LLaMA with LoRA Using AutoModelForCausalLM – Are LoRA Weights Applied Correctly?
#1814 opened Feb 24, 2025 by Hyfred
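For reference, if only the LoRA adapter was saved (rather than merged weights), AutoModelForCausalLM alone loads just the base model; one common pattern is to attach the adapter explicitly with PEFT. A hedged sketch, with the model name and adapter path as placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_name = "meta-llama/Llama-3.1-8B"   # placeholder base model
adapter_dir = "path/to/lora_adapter"    # placeholder adapter checkpoint

base = AutoModelForCausalLM.from_pretrained(base_name)
tokenizer = AutoTokenizer.from_pretrained(base_name)

# Attach the LoRA weights on top of the base model.
model = PeftModel.from_pretrained(base, adapter_dir)

# Optionally fold the adapter into the base weights for plain inference.
model = model.merge_and_unload()
```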
Llama AttributeError: 'bool' object has no attribute 'all_special_tokens'
#1809 opened Feb 24, 2025 by mattguida
KeyError: 'completion_length' in GRPO trainer
Label: currently fixing (Am fixing now!)
#1807 opened Feb 23, 2025 by monk1337
Trainer automatically converts dataset to ChatML despite using Llama chat template in recent version
Label: currently fixing (Am fixing now!)
#1802 opened Feb 22, 2025 by FOCUSLLM
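One quick way to see which template is actually being applied is to render a sample through the tokenizer: ChatML output contains <|im_start|> markers, while a Llama 3 template emits <|start_header_id|> / <|eot_id|>. A small check, with the model name as a placeholder:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")  # placeholder

messages = [{"role": "user", "content": "Hello!"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Inspect the rendered prompt to confirm which chat format was used.
print(text)
```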
VRAM spikes after "LlamaForCausalLM does not accept 'num_items_in_batch'"
#1801 opened Feb 22, 2025 by RWTHEY
Lowercasing the model name after the model has been downloaded triggers an additional download
#1798 opened Feb 22, 2025 by Redix8
CUDA error: out of memory in WSL with 24G VRAM while 2/3 was still left unused
#1797 opened Feb 22, 2025 by ja3592
support NPU?
Label: feature request (Feature request pending on roadmap)
#1794 opened Feb 22, 2025 by RyanOvO
partially initialized module 'torchvision' has no attribute 'extension'
#1793 opened Feb 22, 2025 by z-x-x136
[BUG] 'merge_4bit_forced' missing custom logic to save 4-bit dynamic quant
Label: currently fixing (Am fixing now!)
#1791 opened Feb 21, 2025 by ColumbusAI