
Load local SD3 model error with diffusers backend #4144

Open
JarHMJ opened this issue Nov 14, 2024 · 0 comments
Labels: bug, unconfirmed

Comments


JarHMJ commented Nov 14, 2024

LocalAI version:

localai/localai:master-cublas-cuda12

Environment, CPU architecture, OS, and Version:

Linux worker-node-2 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Describe the bug

Loading a local SD3 model with the diffusers backend fails.

To Reproduce

This is the model config:

backend: diffusers
diffusers:
  cuda: true
  enable_parameters: negative_prompt,num_inference_steps
  pipeline_type: StableDiffusion3Pipeline
f16: false
name: stable-diffusion-3-medium
parameters:
  model: stable-diffusion-3-medium-diffusers
step: 2

stable-diffusion-3-medium-diffusers is a folder containing the contents of https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers.
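The error is triggered by an image generation request; a minimal sketch in Python, assuming LocalAI is reachable at http://localhost:8080 (the endpoint shown in the startup log below) and mirroring the request body captured in the request log:

import requests

# Minimal reproduction of the failing call; the payload mirrors the
# request logged below ("step":25 comes from that log entry).
resp = requests.post(
    "http://localhost:8080/v1/images/generations",
    json={
        "model": "stable-diffusion-3-medium",
        "prompt": "A cat holding a sign that says hello world",
        "size": "1024x1024",
        "step": 25,
    },
    timeout=300,
)
print(resp.status_code, resp.text)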

Expected behavior

The model loads from the local folder and an image is generated.

Logs

@@@@@
Skipping rebuild
@@@@@
If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
CMAKE_ARGS="-DGGML_F16C=OFF -DGGML_AVX512=OFF -DGGML_AVX2=OFF -DGGML_FMA=OFF"
see the documentation at: https://localai.io/basics/build/index.html
Note: See also https://github.com/go-skynet/LocalAI/issues/288
@@@@@
CPU info:
model name      : Intel(R) Xeon(R) Platinum 8458P
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology tsc_reliable nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid cldemote movdiri movdir64b fsrm md_clear serialize amx_bf16 avx512_fp16 amx_tile amx_int8 flush_l1d arch_capabilities
CPU:    AVX    found OK
CPU:    AVX2   found OK
CPU:    AVX512 found OK
@@@@@
9:53AM INF env file found, loading environment variables from file envFile=.env
9:53AM DBG Setting logging to debug
9:53AM INF Starting LocalAI using 32 threads, with models path: /build/models
9:53AM INF LocalAI version: b36ced8 (b36ced8681a4352b962bba1ac42b06e25aca1569)
9:53AM DBG CPU capabilities: [3dnowprefetch abm adx aes amx_bf16 amx_int8 amx_tile apic arat arch_capabilities arch_perfmon avx avx2 avx512_bf16 avx512_bitalg avx512_fp16 avx512_vbmi2 avx512_vnni avx512_vpopcntdq avx512bw avx512cd avx512dq avx512f avx512ifma avx512vbmi avx512vl avx_vnni bmi1 bmi2 cldemote clflush clflushopt clwb cmov constant_tsc cpuid cx16 cx8 de erms f16c flush_l1d fma fpu fsgsbase fsrm fxsr gfni ht ibpb ibrs ibrs_enhanced invpcid invpcid_single lahf_lm lm mca mce md_clear mmx movbe movdir64b movdiri msr mtrr nonstop_tsc nopl nx ospke pae pat pcid pclmulqdq pdpe1gb pge pku pni popcnt pse pse36 rdpid rdrand rdseed rdtscp rep_good sep serialize sha_ni smap smep ss ssbd sse sse2 sse4_1 sse4_2 ssse3 stibp syscall tsc tsc_adjust tsc_deadline_timer tsc_known_freq tsc_reliable umip vaes vme vpclmulqdq wbnoinvd x2apic xgetbv1 xsave xsavec xsaveopt xsaves xtopology]
9:53AM DBG GPU count: 2
9:53AM DBG GPU: card #0  [affined to NUMA node 0]@0000:03:00.0 -> driver: 'nvidia' class: 'Display controller' vendor: 'NVIDIA Corporation' product: 'unknown'
9:53AM DBG GPU: card #1  [affined to NUMA node 0]@0000:03:01.0 -> driver: 'nvidia' class: 'Display controller' vendor: 'NVIDIA Corporation' product: 'unknown'
9:53AM DBG guessDefaultsFromFile: not a GGUF file
9:53AM INF Preloading models from /build/models

  Model name: stable-diffusion-3-medium                                       


9:53AM DBG Model: stable-diffusion-3-medium (config: {PredictionOptions:{Model:stable-diffusion-3-medium-diffusers Language: Translate:false N:0 TopP:0xc00117e870 TopK:0xc00117e878 Temperature:0xc00117e880 Maxtokens:0xc00117e8b0 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc00117e8a8 TypicalP:0xc00117e8a0 Seed:0xc00117e8c8 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:stable-diffusion-3-medium F16:0xc00117e75a Threads:0xc00117e860 Debug:0xc00117e8c0 Roles:map[] Embeddings:0xc00117e8c1 Backend:diffusers TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter:<nil> Multimodal:} KnownUsecaseStrings:[] KnownUsecases:<nil> PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType:} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc00117e898 MirostatTAU:0xc00117e890 Mirostat:0xc00117e888 NGPULayers:0xc00117e8b8 MMap:0xc00117e8c0 MMlock:0xc00117e8c1 LowVRAM:0xc00117e8c1 Grammar: StopWords:[] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc00117e858 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:true PipelineType:StableDiffusion3Pipeline SchedulerType: EnableParameters:negative_prompt,num_inference_steps CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:2 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:})
9:53AM DBG Extracting backend assets files to /tmp/localai/backend_data
9:53AM DBG processing api keys runtime update
9:53AM DBG processing external_backends.json
9:53AM DBG external backends loaded from external_backends.json
9:53AM INF core/startup process completed!
9:53AM DBG No configuration file found at /tmp/localai/upload/uploadedFiles.json
9:53AM DBG No configuration file found at /tmp/localai/config/assistants.json
9:53AM DBG No configuration file found at /tmp/localai/config/assistantsFile.json
9:53AM INF LocalAI API is listening! Please connect to the endpoint for API documentation. endpoint=http://0.0.0.0:8080
9:54AM DBG Request received: {"model":"stable-diffusion-3-medium","language":"","translate":false,"n":0,"top_p":null,"top_k":null,"temperature":null,"max_tokens":null,"echo":false,"batch":0,"ignore_eos":false,"repeat_penalty":0,"repeat_last_n":0,"n_keep":0,"frequency_penalty":0,"presence_penalty":0,"tfz":null,"typical_p":null,"seed":null,"negative_prompt":"","rope_freq_base":0,"rope_freq_scale":0,"negative_prompt_scale":0,"use_fast_tokenizer":false,"clip_skip":0,"tokenizer":"","file":"","size":"1024x1024","prompt":"A cat holding a sign that says hello world","instruction":"","input":null,"stop":null,"messages":null,"functions":null,"function_call":null,"stream":false,"mode":0,"step":25,"grammar":"","grammar_json_functions":null,"backend":"","model_base_name":""}
9:54AM DBG Loading model: stable-diffusion-3-medium
9:54AM DBG guessDefaultsFromFile: not a GGUF file
9:54AM DBG Parameter Config: &{PredictionOptions:{Model:stable-diffusion-3-medium-diffusers Language: Translate:false N:0 TopP:0xc00117e870 TopK:0xc00117e878 Temperature:0xc00117e880 Maxtokens:0xc00117e8b0 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:0 RepeatLastN:0 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc00117e8a8 TypicalP:0xc00117e8a0 Seed:0xc00117e8c8 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:stable-diffusion-3-medium F16:0xc00117e75a Threads:0xc00117e860 Debug:0xc00117f330 Roles:map[] Embeddings:0xc00117e8c1 Backend:diffusers TemplateConfig:{Chat: ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false JoinChatMessagesByCharacter:<nil> Multimodal:} KnownUsecaseStrings:[] KnownUsecases:<nil> PromptStrings:[A cat holding a sign that says hello world] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: ResponseFormat: ResponseFormatMap:map[] FunctionsConfig:{DisableNoAction:false GrammarConfig:{ParallelCalls:false DisableParallelNewLines:false MixedMode:false NoMixedFreeString:false NoGrammar:false Prefix: ExpectStringsAfterJSON:false PropOrder: SchemaType:} NoActionFunctionName: NoActionDescriptionName: ResponseRegex:[] JSONRegexMatch:[] ReplaceFunctionResults:[] ReplaceLLMResult:[] CaptureLLMResult:[] FunctionNameKey: FunctionArgumentsKey:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc00117e898 MirostatTAU:0xc00117e890 Mirostat:0xc00117e888 NGPULayers:0xc00117e8b8 MMap:0xc00117e8c0 MMlock:0xc00117e8c1 LowVRAM:0xc00117e8c1 Grammar: StopWords:[] Cutstrings:[] ExtractRegex:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc00117e858 NUMA:false LoraAdapter: LoraBase: LoraAdapters:[] LoraScales:[] LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: LoadFormat: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj: FlashAttention:false NoKVOffloading:false RopeScaling: ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:true PipelineType:StableDiffusion3Pipeline SchedulerType: EnableParameters:negative_prompt,num_inference_steps CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:2 GRPC:{Attempts:0 AttemptsSleepTime:0} TTSConfig:{Voice: VallE:{AudioPath:}} CUDA:false DownloadFiles:[] Description: Usage:}
9:54AM INF Loading model 'stable-diffusion-3-medium' with backend diffusers
9:54AM DBG Loading model in memory from file: /build/models/stable-diffusion-3-medium-diffusers
9:54AM DBG Loading Model stable-diffusion-3-medium with gRPC (file: /build/models/stable-diffusion-3-medium-diffusers) (backend: diffusers): {backendString:diffusers model:stable-diffusion-3-medium-diffusers modelID:stable-diffusion-3-medium assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc000022a08 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh openvoice:/build/backend/python/openvoice/run.sh parler-tts:/build/backend/python/parler-tts/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
9:54AM DBG Loading external backend: /build/backend/python/diffusers/run.sh
9:54AM DBG external backend is file: &{name:run.sh size:73 mode:493 modTime:{wall:0 ext:63867008110 loc:0x50c033e0} sys:{Dev:1048803 Ino:3235889245 Nlink:1 Mode:33261 Uid:0 Gid:0 X__pad0:0 Rdev:0 Size:73 Blksize:4096 Blocks:8 Atim:{Sec:1731411310 Nsec:0} Mtim:{Sec:1731411310 Nsec:0} Ctim:{Sec:1731488608 Nsec:518016885} X__unused:[0 0 0]}}
9:54AM DBG Loading GRPC Process: /build/backend/python/diffusers/run.sh
9:54AM DBG GRPC Service for stable-diffusion-3-medium will be running at: '127.0.0.1:42163'
9:54AM DBG GRPC Service state dir: /tmp/go-processmanager2065713050
9:54AM DBG GRPC Service Started
9:54AM DBG Wait for the service to start up
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stdout Initializing libbackend for diffusers
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stdout virtualenv activated
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stdout activated virtualenv has been ensured
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr /build/backend/python/diffusers/venv/lib/python3.10/site-packages/transformers/utils/hub.py:128: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr   warnings.warn(
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr Server started. Listening on: 127.0.0.1:42163
9:54AM DBG GRPC Service Ready
9:54AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:stable-diffusion-3-medium-diffusers ContextSize:512 Seed:2058506972 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:99999999 MainGPU: TensorSplit: Threads:32 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/build/models/stable-diffusion-3-medium-diffusers Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType:StableDiffusion3Pipeline SchedulerType: CUDA:true CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 LoadFormat: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type: FlashAttention:false NoKVOffload:false ModelPath:/build/models LoraAdapters:[] LoraScales:[]}
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr Loading model stable-diffusion-3-medium-diffusers...
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr Request Model: "stable-diffusion-3-medium-diffusers"
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr ContextSize: 512
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr Seed: 2058506972
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr NBatch: 512
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr MMap: true
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr NGPULayers: 99999999
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr Threads: 32
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr ModelFile: "/build/models/stable-diffusion-3-medium-diffusers"
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr PipelineType: "StableDiffusion3Pipeline"
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr CUDA: true
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr ModelPath: "/build/models"
9:54AM DBG GRPC(stable-diffusion-3-medium-127.0.0.1:42163): stderr 
9:54AM ERR Server error error="failed to load model with internal loader: could not load model (no success): Unexpected err=ValueError('Invalid `pretrained_model_name_or_path` provided. Please set it to a valid URL.'), type(err)=<class 'ValueError'>" ip=10.33.2.64 latency=10.026025313s method=POST status=500 url=/v1/images/generations
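The ValueError appears to come from path resolution in diffusers' from_pretrained: when the string it receives is not an existing local directory, it typically falls back to treating it as a Hugging Face repo id or URL and rejects it, which matches the "Please set it to a valid URL" message. Note that the backend log above shows Request Model: "stable-diffusion-3-medium-diffusers" (the bare folder name) while ModelFile is the absolute path. A hedged diagnostic sketch, using the container paths from the log (adjust if your mount differs), to confirm the folder is actually visible to the backend process:

import os
from diffusers import StableDiffusion3Pipeline

# Absolute ModelFile path taken from the log above.
path = "/build/models/stable-diffusion-3-medium-diffusers"

# A diffusers pipeline folder must exist and contain model_index.json.
print("is directory:", os.path.isdir(path))
print("has model_index.json:",
      os.path.isfile(os.path.join(path, "model_index.json")))

# If both checks pass, loading directly should succeed, which would narrow
# the problem down to how the path is handed to the backend (e.g. a
# relative model name instead of the absolute ModelFile path).
pipe = StableDiffusion3Pipeline.from_pretrained(path)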

Additional context

JarHMJ added the bug and unconfirmed labels on Nov 14, 2024