Error: No device of requested type available #12701

Open
Wait1997 opened this issue Jan 12, 2025 · 9 comments
@Wait1997

run: ./ollama run qwen2.5-coder:0.5b

Error: llama runner process has terminated: error loading model: No device of requested type available. Please check https://software.intel.com/content/www/us/en/develop/articles/intel-oneapi-dpcpp-system-requirements.html -1 (PI_ERROR_DEVICE_NOT_FOUND)

  • ollama version:
    ollama version is 0.4.6-ipexllm-20250105
  • ipex-llm[cpp] version
Package                 Version
----------------------- --------------
accelerate              0.33.0
bigdl-core-cpp          2.6.0b20250105
certifi                 2024.12.14
charset-normalizer      3.4.1
colorama                0.4.6
dpcpp-cpp-rt            2024.2.1
filelock                3.16.1
fsspec                  2024.12.0
gguf                    0.14.0
huggingface-hub         0.27.1
idna                    3.10
impi-rt                 2021.14.1
intel-cmplr-lib-rt      2024.2.1
intel-cmplr-lib-ur      2024.2.1
intel-cmplr-lic-rt      2024.2.1
intel-opencl-rt         2024.2.1
intel-openmp            2024.2.1
intel-sycl-rt           2024.2.1
ipex-llm                2.2.0b20250105
Jinja2                  3.1.5
MarkupSafe              3.0.2
mkl                     2024.2.1
mkl-dpcpp               2024.2.1
mpmath                  1.3.0
networkx                3.4.2
numpy                   1.26.4
onednn                  2024.2.1
onednn-devel            2024.2.1
onemkl-sycl-blas        2024.2.1
onemkl-sycl-datafitting 2024.2.1
onemkl-sycl-dft         2024.2.1
onemkl-sycl-lapack      2024.2.1
onemkl-sycl-rng         2024.2.1
onemkl-sycl-sparse      2024.2.1
onemkl-sycl-stats       2024.2.1
onemkl-sycl-vm          2024.2.1
packaging               24.2
pip                     24.2
protobuf                4.25.5
psutil                  6.1.1
PyYAML                  6.0.2
regex                   2024.11.6
requests                2.32.3
safetensors             0.5.2
sentencepiece           0.1.99
setuptools              75.1.0
sympy                   1.13.3
tbb                     2021.13.1
tcmlib                  1.2.0
tokenizers              0.19.1
torch                   2.2.0
tqdm                    4.67.1
transformers            4.44.2
typing_extensions       4.12.2
umf                     0.9.1
urllib3                 2.3.0
wheel                   0.44.0

system info:

Windows
Device name	DESKTOP-E86TOAU
Processor	Intel(R) Core(TM) Ultra 5 125H   3.60 GHz
Installed RAM	16.0 GB (15.6 GB usable)
Device ID	6CC66654-D557-4672-8A6D-1B4B12C6109E
Product ID	00326-70000-00001-AA517
System type	64-bit operating system, x64-based processor

gpu info:

  • Intel(R) Arc(TM) Graphics
  • GPU driver version: 32.0.101.6449

Intel® oneAPI Base Toolkit Info

  • C:\Program Files (x86)\Intel\oneAPI\2024.2
  • C:\Program Files (x86)\Intel\oneAPI\2025.0
sgwhat self-assigned this Jan 13, 2025
sgwhat (Contributor) commented Jan 13, 2025

Hi @Wait1997, I believe Ollama is unable to run on your device because multiple versions of oneAPI (2024.2 and 2025.0) are installed at the same time. Please refer to our ollama quickstart and llama.cpp quickstart to prepare the environment.
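
One way to check whether the SYCL runtime can actually see the Arc GPU is to list the devices it detects. A minimal sketch, assuming the oneAPI 2024.2 Base Toolkit is installed at its default path (with ipex-llm[cpp] the runtime libraries also come in via pip, so this is only a diagnostic):

    :: initialize the oneAPI environment in the current cmd session
    call "C:\Program Files (x86)\Intel\oneAPI\setvars.bat"
    :: list the SYCL backends/devices visible to the runtime;
    :: the Arc iGPU should show up as a level_zero or opencl GPU entry
    sycl-ls

If no GPU entry is listed, the PI_ERROR_DEVICE_NOT_FOUND error is coming from the driver/runtime setup rather than from ollama itself.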

Wait1997 (Author) commented Jan 13, 2025

Hi @sgwhat, I uninstalled the 2025.0 version of oneAPI as you suggested and am now using the 2024.2 version, but I still encounter the same problem:

Error: llama runner process has terminated: error loading model: No device of requested type available. Please check https://software.intel.com/content/www/us/en/develop/articles/intel-oneapi-dpcpp-system-requirements.html -1 (PI_ERROR_DEVICE_NOT_FOUND)
I configured everything exactly according to the documentation, and my versions match the ones it lists. The document I followed is Run Ollama with IPEX-LLM on Intel GPU.

  • I chose to reinstall ipex-llm ollama, and now a new problem has appeared. When I run ./ollama --version, I get:
    ollama version is 0.3.6-ipexllm-20250113
    Warning: client version is 0.5.1-ipexllm-20250112
    I still don't know why this is happening.

sgwhat (Contributor) commented Jan 13, 2025

Please ensure that you have only one ollama serve process running. You can run set OLLAMA_HOST=0.0.0.0 in the current terminal session before starting it.
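
A rough sketch of what that could look like in a cmd prompt, run from the directory where init-ollama.bat placed the ollama binary (the extra environment variables are optional ones commonly set in the IPEX-LLM quickstart, not required by the fix itself):

    :: check whether an ollama server is already running, and stop it if so
    tasklist /fi "imagename eq ollama.exe"
    taskkill /f /im ollama.exe

    :: bind the server to all interfaces and start a single serve process
    set OLLAMA_HOST=0.0.0.0
    set OLLAMA_NUM_GPU=999
    set SYCL_CACHE_PERSISTENT=1
    ollama serve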

Wait1997 (Author)

Thanks, but it still doesn't seem to work.

sgwhat (Contributor) commented Jan 13, 2025

I am a bit confused 😂, could you please provide more detailed logs?

Wait1997 (Author) commented Jan 13, 2025

I've put together a document where I recorded this issue; you can find the specific information in it: IPEX-LLM on Intel GPU. You can look at this link.

sgwhat (Contributor) commented Jan 14, 2025

My suggestion is to reinstall ipex-llm ollama in a fresh conda environment, and then run init-ollama.bat in a new directory.
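
For reference, a minimal sketch of that reinstall flow based on the quickstart (the Python version and package name are the ones the quickstart uses; the directory name is just an example):

    :: create and activate a fresh conda environment
    conda create -n llm-cpp python=3.11
    conda activate llm-cpp

    :: install the IPEX-LLM llama.cpp/ollama backend
    pip install --pre --upgrade ipex-llm[cpp]

    :: link the ollama binaries into a new, empty directory and initialize it
    mkdir ollama-dir
    cd ollama-dir
    init-ollama.bat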

Wait1997 (Author)

> My suggestion is to reinstall ipex-llm ollama in a fresh conda environment, and then run init-ollama.bat in a new directory.

Yes, I did reinstall yesterday, but after reinstalling, ollama --version still warns that the versions are inconsistent (I also mentioned this above). Is it possible that the version is too new?

When I run ollama --version, it shows the following:
ollama version is 0.3.6-ipexllm-20250113
Warning: client version is 0.5.1-ipexllm-20250112

sgwhat (Contributor) commented Jan 14, 2025

As mentioned in my previous comment in #12701 (comment), I don't think this version-inconsistency issue is caused by ipex-llm ollama itself. You may want to check whether any other ollama process (not ipex-llm ollama, e.g. the upstream ollama) is still running on your device.
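
A quick way to check for a conflicting upstream ollama installation on Windows (a sketch; the upstream installer typically registers a tray app that keeps a background server running):

    :: show every ollama binary found on PATH; more than one entry usually
    :: means the upstream install is shadowing the ipex-llm one
    where ollama
    :: show any ollama server processes that are still running
    tasklist /fi "imagename eq ollama.exe"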
