Environment: Windows 11, running inside a conda virtual environment with Python 3.8. I completed the `pip install` step, but running `python chat.py` fails with the following error:
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
./lora-Vicuna/checkpoint-3000\adapter_model.bin
./lora-Vicuna/checkpoint-3000\pytorch_model.bin
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████| 33/33 [00:49<00:00, 1.49s/it]
Traceback (most recent call last):
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\peft\utils\config.py", line 99, in from_pretrained
config_file = hf_hub_download(pretrained_model_name_or_path, CONFIG_NAME)
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\huggingface_hub\utils_validators.py", line 112, in _inner_fn
validate_repo_id(arg_value)
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\huggingface_hub\utils_validators.py", line 160, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './lora-Vicuna/checkpoint-3000'. Use repo_type argument if needed.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ".\chat.py", line 81, in
model = SteamGenerationMixin.from_pretrained(
File "I:\Chinese-Vicuna\utils.py", line 670, in from_pretrained
config = LoraConfig.from_pretrained(model_id)
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\peft\utils\config.py", line 101, in from_pretrained
raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at './lora-Vicuna/checkpoint-3000'
The problem persists whether or not I use a proxy; I have retried multiple times with the same result.
@googlebox007 Please see the "how to use generate" part of the "How to use" section in our README. Intermediate checkpoints save the weights as pytorch_model.bin and do not save adapter_config.json. You need to copy the adapter_config.json from training into the checkpoint directory, and rename the model file to adapter_model.bin, before the checkpoint can be loaded. Our scripts perform this copy and rename automatically.
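As a minimal sketch of the manual fix described above (the function name and paths are hypothetical, not part of the Chinese-Vicuna scripts), copying the config and renaming the weights might look like:

```python
import shutil
from pathlib import Path


def prepare_checkpoint(ckpt_dir: str, adapter_config_src: str) -> None:
    """Make an intermediate training checkpoint loadable by PEFT.

    Copies the adapter_config.json saved at training time into the
    checkpoint directory and renames pytorch_model.bin to
    adapter_model.bin, which is the filename LoraConfig /
    PeftModel.from_pretrained expect.
    """
    ckpt = Path(ckpt_dir)
    # Intermediate checkpoints lack adapter_config.json; bring it in.
    shutil.copy(adapter_config_src, ckpt / "adapter_config.json")
    # Rename the weights file so PEFT can find the adapter weights.
    weights = ckpt / "pytorch_model.bin"
    if weights.exists():
        weights.rename(ckpt / "adapter_model.bin")
```

For the reported setup this would be called as, e.g., `prepare_checkpoint("./lora-Vicuna/checkpoint-3000", "<path to training adapter_config.json>")` before running `chat.py`.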