Windows 11, new user/installer - OSError: Incorrect path_or_model_id #107

Open

17CS opened this issue Jan 17, 2025 · 1 comment

17CS commented Jan 17, 2025

(I'm installing on Windows, but did not use PowerShell.)
I followed the instructions from this video, but things have changed on the setup page, so I went through it step by step:
https://youtu.be/wTIfxdFoXm8?si=LHkdpVu6_atC5CMa
As far as I know, everything was installed without errors.

To start up MagicQuill in the command prompt, I used:

C:\AI-Installers\MagicQuill>conda activate MagicQuill
(MagicQuill) C:\AI-Installers\MagicQuill>set CUDA_VISIBLE_DEVICES=0 && python gradio_run.py

Then I got this error, and I don't know how to fix it:

Total VRAM 16376 MB, total RAM 130831 MB
pytorch version: 2.1.2+cu118
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4080 SUPER : native
Using pytorch cross attention
['C:\\AI-Installers\\MagicQuill', 'C:\\Users\\colto\\.conda\\envs\\MagicQuill\\python310.zip', 'C:\\Users\\colto\\.conda\\envs\\MagicQuill\\DLLs', 'C:\\Users\\colto\\.conda\\envs\\MagicQuill\\lib', 'C:\\Users\\colto\\.conda\\envs\\MagicQuill', 'C:\\Users\\colto\\.conda\\envs\\MagicQuill\\lib\\site-packages', '__editable__.llava-1.2.2.post1.finder.__path_hook__', 'C:\\AI-Installers\\MagicQuill\\MagicQuill']
Traceback (most recent call last):
  File "C:\Users\colto\.conda\envs\MagicQuill\lib\site-packages\transformers\utils\hub.py", line 385, in cached_file
    resolved_file = hf_hub_download(
  File "C:\Users\colto\.conda\envs\MagicQuill\lib\site-packages\huggingface_hub\utils\_validators.py", line 106, in _inner_fn
    validate_repo_id(arg_value)
  File "C:\Users\colto\.conda\envs\MagicQuill\lib\site-packages\huggingface_hub\utils\_validators.py", line 160, in validate_repo_id
    raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'C:\AI-Installers\MagicQuill\models\llava-v1.5-7b-finetune-clean'.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\AI-Installers\MagicQuill\gradio_run.py", line 24, in <module>
    llavaModel = LLaVAModel()
  File "C:\AI-Installers\MagicQuill\MagicQuill\llava_new.py", line 26, in __init__
    self.tokenizer, self.model, self.image_processor, self.context_len = load_pretrained_model(
  File "C:\AI-Installers\MagicQuill\MagicQuill\LLaVA\llava\model\builder.py", line 116, in load_pretrained_model
    tokenizer = AutoTokenizer.from_pretrained(model_path, use_fast=False)
  File "C:\Users\colto\.conda\envs\MagicQuill\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 758, in from_pretrained
    tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\colto\.conda\envs\MagicQuill\lib\site-packages\transformers\models\auto\tokenization_auto.py", line 590, in get_tokenizer_config
    resolved_config_file = cached_file(
  File "C:\Users\colto\.conda\envs\MagicQuill\lib\site-packages\transformers\utils\hub.py", line 450, in cached_file
    raise EnvironmentError(
OSError: Incorrect path_or_model_id: 'C:\AI-Installers\MagicQuill\models\llava-v1.5-7b-finetune-clean'. Please provide either the path to a local folder or the repo_id of a model on the Hub.

(MagicQuill) C:\AI-Installers\MagicQuill>

So then I tried this:

(MagicQuill) C:\AI-Installers\MagicQuill>python gradio_run.py

And I got this error again. Please help.

(The output and traceback were identical to the first run, ending in the same OSError.)

@Looz-Ashae commented

Check issue #54.
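
The traceback also shows why this happens: AutoTokenizer.from_pretrained() only falls back to hf_hub_download() when it cannot find a local folder at the given path. So transformers did not find a directory at C:\AI-Installers\MagicQuill\models\llava-v1.5-7b-finetune-clean and then tried to treat the Windows path as a Hub repo id, which fails validation. Here is a minimal check, assuming the path from the traceback above (the file names inside the checkpoint are an assumption based on a typical LLaVA-1.5 layout):

import os

# Path copied from the traceback above; adjust if your install lives elsewhere.
model_path = r"C:\AI-Installers\MagicQuill\models\llava-v1.5-7b-finetune-clean"

print("folder exists:", os.path.isdir(model_path))
if os.path.isdir(model_path):
    # AutoTokenizer.from_pretrained() expects tokenizer files in this folder;
    # these names are typical for a LLaVA-1.5 checkpoint (assumption).
    for name in ("config.json", "tokenizer_config.json", "tokenizer.model"):
        print(name, "present:", os.path.isfile(os.path.join(model_path, name)))

If the folder is missing or empty, re-download the MagicQuill model weights and place them so that this path resolves to a real directory.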
