
Error when running python chat.py #56

Closed
googlebox007 opened this issue Apr 10, 2023 · 2 comments

@googlebox007

Environment: Windows 11, running in a conda virtual environment with Python 3.8. After completing `pip install`, running `python chat.py` fails. The error output is as follows:
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. "
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'LLaMATokenizer'.
The class this function is called from is 'LlamaTokenizer'.
./lora-Vicuna/checkpoint-3000\adapter_model.bin
./lora-Vicuna/checkpoint-3000\pytorch_model.bin
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████| 33/33 [00:49<00:00, 1.49s/it]
Traceback (most recent call last):
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\peft\utils\config.py", line 99, in from_pretrained
config_file = hf_hub_download(pretrained_model_name_or_path, CONFIG_NAME)
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\huggingface_hub\utils_validators.py", line 112, in _inner_fn
validate_repo_id(arg_value)
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\huggingface_hub\utils_validators.py", line 160, in validate_repo_id
raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': './lora-Vicuna/checkpoint-3000'. Use repo_type argument if needed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File ".\chat.py", line 81, in
model = SteamGenerationMixin.from_pretrained(
File "I:\Chinese-Vicuna\utils.py", line 670, in from_pretrained
config = LoraConfig.from_pretrained(model_id)
File "C:\Users\tuzhiyong.conda\envs\python38\lib\site-packages\peft\utils\config.py", line 101, in from_pretrained
raise ValueError(f"Can't find '{CONFIG_NAME}' at '{pretrained_model_name_or_path}'")
ValueError: Can't find 'adapter_config.json' at './lora-Vicuna/checkpoint-3000'

It fails whether or not I use a proxy; I have retried many times and the problem persists.

@Facico
Owner

Facico commented Apr 11, 2023

@googlebox007 See the "how to use generate" part of the How to use section in our README. Intermediate checkpoints save the weights as pytorch_model and do not save an adapter_config, so you need to copy the adapter_config from training into the checkpoint directory and rename the weights file to adapter_model before it can be loaded. Our scripts perform this copy and rename automatically.
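The copy-and-rename step described above can be sketched as a small helper. This is a hypothetical illustration, not code from the repo: the function name `prepare_checkpoint` and the path arguments are assumptions, based on the checkpoint layout shown in the traceback.

```python
import shutil
from pathlib import Path

def prepare_checkpoint(ckpt_dir, adapter_config_src):
    """Make an intermediate training checkpoint loadable by PEFT.

    Copies the training-time adapter_config.json into the checkpoint
    directory and renames pytorch_model.bin to adapter_model.bin, which
    is the filename LoraConfig.from_pretrained / PEFT expects.
    (Hypothetical helper; the repo's own scripts do this automatically.)
    """
    ckpt = Path(ckpt_dir)
    # PEFT fails with "Can't find 'adapter_config.json'" without this file.
    shutil.copy(adapter_config_src, ckpt / "adapter_config.json")
    # Intermediate checkpoints store the LoRA weights under the generic name.
    weights = ckpt / "pytorch_model.bin"
    if weights.exists():
        weights.rename(ckpt / "adapter_model.bin")
    return ckpt / "adapter_model.bin"
```

For example, `prepare_checkpoint("./lora-Vicuna/checkpoint-3000", "<path to the adapter_config.json saved during training>")` would make the directory from the traceback loadable.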

@yuxuan2015

@googlebox007 Add a $ in front of USE_LOCAL on line 5 of chat.sh.
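The one-character fix lets the shell expand the variable instead of passing the literal word. A minimal illustration, assuming chat.sh passes a `--use_local` flag (the actual flag name and line contents in chat.sh may differ):

```shell
USE_LOCAL=1

# before (wrong): the literal string "USE_LOCAL" is passed to chat.py
#   python chat.py --use_local USE_LOCAL
# after (right): the shell expands $USE_LOCAL to its value
ARGS="--use_local $USE_LOCAL"
echo "$ARGS"   # prints: --use_local 1
```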
