Expected Behavior:
Launch of program

Actual Behavior:
(MagicQuill) C:\Users\VR2\Desktop\MagicQuill> python gradio_run.py
Total VRAM 24576 MB, total RAM 32684 MB
pytorch version: 2.1.2+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : native
Using pytorch cross attention
['C:\Users\VR2\Desktop\MagicQuill', 'C:\Users\VR2\.conda\envs\MagicQuill\python310.zip', 'C:\Users\VR2\.conda\envs\MagicQuill\DLLs', 'C:\Users\VR2\.conda\envs\MagicQuill\lib', 'C:\Users\VR2\.conda\envs\MagicQuill', 'C:\Users\VR2\.conda\envs\MagicQuill\lib\site-packages', 'editable.llava-1.2.2.post1.finder.path_hook', 'C:\Users\VR2\Desktop\MagicQuill\MagicQuill']
Traceback (most recent call last):
  File "C:\Users\VR2\.conda\envs\MagicQuill\lib\site-packages\transformers\utils\hub.py", line 385, in cached_file
    resolved_file = hf_hub_download(
  File "C:\Users\VR2\.conda\envs\MagicQuill\lib\site-packages\huggingface_hub\utils\_validators.py", line 106, in inner_fn
    validate_repo_id(arg_value)
  File "C:\Users\VR2\.conda\envs\MagicQuill\lib\site-packages\huggingface_hub\utils\_validators.py", line 160, in validate_repo_id
    raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'C:\Users\VR2\Desktop\MagicQuill\models\llava-v1.5-7b-finetune-clean'.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\VR2\Desktop\MagicQuill\gradio_run.py", line 21, in <module>
    llavaModel = LLaVAModel()
  File "C:\Users\VR2\Desktop\MagicQuill\MagicQuill\llava_new.py", line 26, in __init__
    self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
  File "C:\Users\VR2\Desktop\MagicQuill\MagicQuill\LLaVA\llava\model\builder.py", line 116, in load_pretrained_model
    tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
  File "C:\Users\VR2\.conda\envs\MagicQuill\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 758, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\VR2\.conda\envs\MagicQuill\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 590, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "C:\Users\VR2\.conda\envs\MagicQuill\lib\site-packages\transformers\utils\hub.py", line 450, in cached_file
    raise EnvironmentError(
OSError: Incorrect path_or_model_id: 'C:\Users\VR2\Desktop\MagicQuill\models\llava-v1.5-7b-finetune-clean'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
(MagicQuill) C:\Users\VR2\Desktop\MagicQuill>
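Note on the error: as the final OSError says, transformers treats the string as a local folder only when that folder actually exists; otherwise it falls back to validating it as a Hub repo id, which a Windows path (drive letter, backslashes) can never pass. So the root cause is almost certainly that the model folder is missing or misplaced. A minimal pre-flight check (the helper name is illustrative; the path is the one from the traceback):

```python
import os

def check_model_path(model_path):
    """Report whether transformers would see this as a local folder."""
    if os.path.isdir(model_path):
        # A real directory: from_pretrained will load from disk.
        return "local folder found"
    # No such directory: transformers falls back to treating the
    # string as a Hub repo id, producing the HFValidationError above.
    return "missing: will be treated as a Hub repo id"

# Path copied from the traceback; adjust to your checkout.
print(check_model_path(r"C:\Users\VR2\Desktop\MagicQuill\models\llava-v1.5-7b-finetune-clean"))
```

If this prints "missing", re-download the llava-v1.5-7b-finetune-clean weights into the models folder before launching gradio_run.py.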
Additional Context/Details
Environment
OS: Windows
Version: 11
Any Relevant Dependencies: