Merging a LoRA into a model results in an error thrown by the safetensors package #3238
Closed
Labels
solved (This problem has already been solved)
Reminder
Reproduction
python src/export_model.py --model_name_or_path "basemodel1" --adapter_name_or_path "checkpoint1" --template default --export_dir "export1" --export_size 2
Expected behavior
I was hoping the LoRA for the 7B model would merge into the base model, since I was able to train a small LoRA within 16 GB of VRAM.
The warnings indicate that some weights were offloaded to the CPU, but safetensors apparently does not support saving offloaded tensors. I'm not sure how to work around this.
Although the filesystem is under Cygwin, I ran the script from the Windows command line. I cloned the repo yesterday or so, and I am using current gaming NVIDIA drivers with the option to swap into system memory enabled (so the GPU swaps slowly instead of crashing).
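A possible workaround, offered only as a sketch and not as the repository's own export path: if the failure comes from accelerate offloading weights to the CPU, the merge can be done entirely on the CPU with PEFT, so nothing is offloaded when safetensors serializes the result. The paths below ("basemodel1", "checkpoint1", "export1") are the placeholders from the command above.

```python
# Sketch of a CPU-only LoRA merge (assumed workaround; paths are placeholders).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "basemodel1"      # placeholder: base model path
adapter_path = "checkpoint1"  # placeholder: LoRA checkpoint path
export_dir = "export1"        # placeholder: output directory

# Load the base model on the CPU in fp16 so nothing is offloaded by accelerate.
base = AutoModelForCausalLM.from_pretrained(
    base_path,
    torch_dtype=torch.float16,
    device_map={"": "cpu"},
    low_cpu_mem_usage=True,
)

# Attach the LoRA adapter and fold its weights into the base weights.
model = PeftModel.from_pretrained(base, adapter_path)
merged = model.merge_and_unload()

# Save sharded safetensors files, roughly matching --export_size 2 (GB per shard).
merged.save_pretrained(export_dir, safe_serialization=True, max_shard_size="2GB")
AutoTokenizer.from_pretrained(base_path).save_pretrained(export_dir)
```

A 7B model in fp16 needs roughly 14 GB of system RAM for this, so it sidesteps the 16 GB VRAM limit entirely at the cost of a slower, CPU-bound merge.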
System Info
transformers version: 4.39.3
Others
No response