undefined symbol: cquantize_blockwise_fp16_fp4 #31
```
…and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
bin /home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so
```
@DamonGuzman Looks like you did not build bitsandbytes with GPU support? It is loading the CPU and not the CUDA version.
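A quick way to confirm that diagnosis (a minimal sketch; the site-packages path is taken from the trace in this thread, adjust for your own environment):

```bash
# Does PyTorch itself see a CUDA device?
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

# Which compiled bitsandbytes binaries are present in the env?
# If the CUDA .so matching your toolkit is missing, only the CPU
# stub (libbitsandbytes_cpu.so) can be loaded.
ls /home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/
```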
That was actually really close to the problem I was having! I had created a new conda environment and needed to install the CUDA toolkit in that new environment. I had always assumed the CUDA toolkit was a system-wide package, but that doesn't seem to be the case.
Just in case this is helpful for someone: if you get this with Docker, make sure to use an image with the CUDA toolkit installed, e.g.:
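The example image itself did not survive above; as a purely illustrative assumption, one of NVIDIA's `devel` tags (which bundle the toolkit, unlike the slimmer `base`/`runtime` tags) would fit:

```bash
# Hypothetical example image; requires the NVIDIA Container Toolkit
# on the host for --gpus to work.
docker run --gpus all -it nvidia/cuda:11.7.1-devel-ubuntu22.04 bash
```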
I solved this with `conda install cudatoolkit=11.7 -y`.
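For completeness, a minimal sketch of that per-environment install, plus a check that the package actually landed in the active env:

```bash
# Install the CUDA toolkit into the *active* conda env (11.7 matches this thread)
conda install cudatoolkit=11.7 -y

# Verify it is now listed for this environment
conda list cudatoolkit
```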
For me, I just replaced libbitsandbytes_cpu.so with libbitsandbytes_cuda117.so, where 117 is the CUDA version I am now using. You may refer to this link: bitsandbytes-foundation/bitsandbytes#156 (comment)
I faced the same issue today; turns out I had a version conflict between CUDA and PyTorch. A fresh install of CUDA and PyTorch did the trick for me.
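One way to spot such a conflict before reinstalling (a sketch; `nvcc` is only on the PATH if the toolkit is installed):

```bash
# CUDA runtime version PyTorch was built against
python -c "import torch; print(torch.version.cuda)"

# CUDA toolkit version on the PATH; a mismatch between the two
# (e.g. 11.7 vs 11.8) is a common cause of these symbol errors
nvcc --version
```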
You can try this method: copy the CUDA build of the library over the CPU one, then rerun (see the sketch below). P.S.: "CUDA version 117" means your CUDA version, so here the file is libbitsandbytes_cuda117.so.
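The commands in that comment were lost in extraction; a hedged reconstruction, using the site-packages path from the trace above (back up the original file first):

```bash
# Overwrite the CPU stub with the CUDA build matching your toolkit.
# "cuda117" = CUDA 11.7; pick the .so that matches `nvcc --version`.
cd /home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/
cp libbitsandbytes_cpu.so libbitsandbytes_cpu.so.bak
cp libbitsandbytes_cuda117.so libbitsandbytes_cpu.so
```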
This method worked for me in WSL Ubuntu on Windows 11. I installed Anaconda3 and the CUDA 11.8 version; fine-tuning Vicuna-13B with QLoRA works well on an RTX 4090.
I solved it by reinstalling bitsandbytes with pip.
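A sketch of that reinstall, assuming a pip-managed install:

```bash
# Remove the broken install, then fetch a fresh wheel
pip uninstall -y bitsandbytes
pip install bitsandbytes
```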
Thank you very much, I solved the same problem with this approach as well.
```
AttributeError                            Traceback (most recent call last)
Cell In[6], line 1
----> 1 model = LlamaForCausalLM.from_pretrained("../hf_llama", device_map="auto", torch_dtype=torch.float16, load_in_4bit=True)

File ~/anaconda3/envs/qlora/lib/python3.11/site-packages/transformers/modeling_utils.py:2829, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
   2819 if dtype_orig is not None:
   2820     torch.set_default_dtype(dtype_orig)
   2822 (
   2823     model,
   2824     missing_keys,
   2825     unexpected_keys,
   2826     mismatched_keys,
   2827     offload_index,
   2828     error_msgs,
-> 2829 ) = cls._load_pretrained_model(
   2830     model,
   2831     state_dict,
   2832     loaded_state_dict_keys,  # XXX: rename?
   2833     resolved_archive_file,
   2834     pretrained_model_name_or_path,
   2835     ignore_mismatched_sizes=ignore_mismatched_sizes,
   2836     sharded_metadata=sharded_metadata,
   2837     _fast_init=_fast_init,
   2838     low_cpu_mem_usage=low_cpu_mem_usage,
...
--> 394 func = self._FuncPtr((name_or_ordinal, self))
    395 if not isinstance(name_or_ordinal, int):
    396     func.__name__ = name_or_ordinal

AttributeError: /home/server/anaconda3/envs/qlora/lib/python3.11/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cquantize_blockwise_fp16_fp4
```