Using LLamaPro and LORA gives error: KeyError: 'train.num_layer_trainable' #4705
Comments
Doing the expansion with the LLaMA Pro script in the library fixed the issue. I was initially doing the expansion manually.
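For context, a minimal toy sketch of what LLaMA Pro-style depth expansion does: copied blocks are interleaved evenly through the layer stack, with their outputs zero-initialized so the expanded model initially behaves like the original. The function name and layer representation here are illustrative, not LLaMA-Factory's actual API; the library's own expansion script should be used in practice.

```python
import copy

def expand_blocks(layers, num_expand):
    """Insert num_expand copied blocks, spread evenly through the stack.

    Toy illustration: `layers` is just a list of dicts standing in for
    transformer blocks; real implementations copy module weights.
    """
    n = len(layers)
    assert n % num_expand == 0, "layer count must be divisible by num_expand"
    stride = n // num_expand
    expanded = []
    for i, layer in enumerate(layers):
        expanded.append(layer)
        if (i + 1) % stride == 0:
            new_layer = copy.deepcopy(layer)
            # In the real method the copied block's output projections are
            # zero-initialized so it starts as an identity mapping.
            new_layer["zero_init"] = True
            expanded.append(new_layer)
    return expanded

layers = [{"id": i} for i in range(8)]
print(len(expand_blocks(layers, 2)))  # 8 originals + 2 copies = 10
```

Doing this by hand on a real checkpoint is error-prone (config fields such as the layer count must stay consistent), which may be why the manual expansion triggered the KeyError while the library script did not.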
@hiyouga Apparently the error persists. I just checked the LoRA finetune I did; I hadn't checked the LLaMA Pro box. Checking it makes the error return.
To create a public link, set
fixed
Worked. Thanks a lot, mate, you're a lifesaver. I'm working on my thesis with LLaMA-Factory. Could you by any chance check this one out as well? Traceback (most recent call last):
Try reinstalling unsloth.
Reinstalling unsloth per the guidelines in the unsloth library produced the error below, which wasn't fixed even after removing and reinstalling both bitsandbytes and unsloth.
I thought unsloth may be incompatible with LLaMA Pro.
Reminder
System Info
Latest version, Ubuntu 24.04
Reproduction
Run LLaMA Pro and LoRA together for finetuning on a model with expanded blocks.
Expected behavior
It should run normally and start training the model.
Others
No response