
[Bug]: MQLLMEgine Error on Apple Silicon M4 Pro #11863

Closed
1 task done
h14turbo opened this issue Jan 8, 2025 · 2 comments
Labels
bug Something isn't working

h14turbo commented Jan 8, 2025

Your current environment

The output of `python collect_env.py`

Model Input Dumps

No response

🐛 Describe the bug

I receive the following error when calling vLLM running on macOS on my M4 Pro Mac mini with 64 GB RAM. The same error occurs on my M1 Ultra Mac Studio. I have followed the installation steps for macOS correctly, including reinstalling the Xcode command line tools. The error happens instantly when calling the server, and occurs with every model I have tried.

INFO 01-08 18:58:57 logger.py:37] Received request cmpl-e4776fdb878c4500912374ad23cd2785-0: prompt: 'User: Why is the sky blue?\nAssistant:', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=1.0, top_p=1.0, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=100, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: [128000, 1502, 25, 8595, 374, 279, 13180, 6437, 5380, 72803, 25], lora_request: None, prompt_adapter_request: None.
INFO 01-08 18:58:57 engine.py:267] Added request cmpl-e4776fdb878c4500912374ad23cd2785-0.
CRITICAL 01-08 18:58:57 launcher.py:99] MQLLMEngine is already dead, terminating server process
INFO:     10.2.4.193:54818 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
ERROR 01-08 18:58:57 engine.py:135] AttributeError("'SiluAndMul' object has no attribute 'op'")
ERROR 01-08 18:58:57 engine.py:135] Traceback (most recent call last):
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/engine/multiprocessing/engine.py", line 133, in start
ERROR 01-08 18:58:57 engine.py:135]     self.run_engine_loop()
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/engine/multiprocessing/engine.py", line 196, in run_engine_loop
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/engine/multiprocessing/engine.py", line 214, in engine_step
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/engine/multiprocessing/engine.py", line 205, in engine_step
ERROR 01-08 18:58:57 engine.py:135]     return self.engine.step()
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/engine/llm_engine.py", line 1394, in step
ERROR 01-08 18:58:57 engine.py:135]     outputs = self.model_executor.execute_model(
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/executor/cpu_executor.py", line 201, in execute_model
ERROR 01-08 18:58:57 engine.py:135]     output = self.driver_method_invoker(self.driver_worker,
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/executor/cpu_executor.py", line 298, in _driver_method_invoker
ERROR 01-08 18:58:57 engine.py:135]     return getattr(driver, method)(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/worker/worker_base.py", line 344, in execute_model
ERROR 01-08 18:58:57 engine.py:135]     output = self.model_runner.execute_model(
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 01-08 18:58:57 engine.py:135]     return func(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/worker/cpu_model_runner.py", line 530, in execute_model
ERROR 01-08 18:58:57 engine.py:135]     hidden_states = model_executable(
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
ERROR 01-08 18:58:57 engine.py:135]     return self._call_impl(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
ERROR 01-08 18:58:57 engine.py:135]     return forward_call(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/model_executor/models/llama.py", line 569, in forward
ERROR 01-08 18:58:57 engine.py:135]     model_output = self.model(input_ids, positions, kv_caches,
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/compilation/decorators.py", line 170, in __call__
ERROR 01-08 18:58:57 engine.py:135]     return self.forward(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/model_executor/models/llama.py", line 360, in forward
ERROR 01-08 18:58:57 engine.py:135]     hidden_states, residual = layer(positions, hidden_states,
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
ERROR 01-08 18:58:57 engine.py:135]     return self._call_impl(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
ERROR 01-08 18:58:57 engine.py:135]     return forward_call(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/model_executor/models/llama.py", line 283, in forward
ERROR 01-08 18:58:57 engine.py:135]     hidden_states = self.mlp(hidden_states)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
ERROR 01-08 18:58:57 engine.py:135]     return self._call_impl(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
ERROR 01-08 18:58:57 engine.py:135]     return forward_call(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/model_executor/models/llama.py", line 93, in forward
ERROR 01-08 18:58:57 engine.py:135]     x = self.act_fn(x)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
ERROR 01-08 18:58:57 engine.py:135]     return self._call_impl(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
ERROR 01-08 18:58:57 engine.py:135]     return forward_call(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/model_executor/custom_op.py", line 24, in forward
ERROR 01-08 18:58:57 engine.py:135]     return self._forward_method(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/model_executor/custom_op.py", line 48, in forward_cpu
ERROR 01-08 18:58:57 engine.py:135]     return self.forward_cuda(*args, **kwargs)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/model_executor/layers/activation.py", line 79, in forward_cuda
ERROR 01-08 18:58:57 engine.py:135]     self.op(out, x)
ERROR 01-08 18:58:57 engine.py:135]   File "/Users/ai_dev_mac_mini/Documents/vllm/vllm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1931, in __getattr__
ERROR 01-08 18:58:57 engine.py:135]     raise AttributeError(
ERROR 01-08 18:58:57 engine.py:135] AttributeError: 'SiluAndMul' object has no attribute 'op'
INFO:     Shutting down
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
INFO:     Finished server process
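For context, the failure mode in the traceback is a dispatch pattern: custom_op.py's forward_cpu falls back to forward_cuda, which references a self.op attribute that is only bound when a compiled kernel is available. Below is a minimal, self-contained sketch of that pattern; all class and method names are hypothetical, modeled on the frames in the traceback, not on vLLM's actual source.

```python
class CustomOp:
    """Sketch of a base class that dispatches to a backend-specific method."""

    def __init__(self):
        # Mirrors custom_op.py line 24: forward() delegates to a bound method.
        self._forward_method = self.forward_cpu

    def forward(self, *args, **kwargs):
        return self._forward_method(*args, **kwargs)

    def forward_cpu(self, *args, **kwargs):
        # Mirrors custom_op.py line 48: the default CPU path falls
        # through to the CUDA implementation.
        return self.forward_cuda(*args, **kwargs)


class SiluAndMul(CustomOp):
    """Hypothetical stand-in for the activation layer in the traceback."""

    def __init__(self, kernels_available: bool):
        super().__init__()
        if kernels_available:
            # Only a build with compiled kernels ever binds self.op.
            self.op = lambda out, x: out

    def forward_cuda(self, x):
        out = [0.0] * (len(x) // 2)
        self.op(out, x)  # raises AttributeError when self.op was never bound
        return out


try:
    SiluAndMul(kernels_available=False).forward([1.0, 2.0])
except AttributeError as e:
    print(e)  # 'SiluAndMul' object has no attribute 'op'
```

On a backend where the kernel was never registered, the fallback chain forward → forward_cpu → forward_cuda reaches self.op and raises exactly the AttributeError seen above, which matches the fix referenced in the reply below.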

I use the following Python code to call the API:

from openai import OpenAI

client = OpenAI(
base_url="http://10.2.5.153:8000/v1",
api_key="token-abc123",
)

# Convert chat input into a plain prompt for completion

prompt = "User: Why is the sky blue?\nAssistant:"

completion = client.completions.create(
model="NousResearch/Llama-3.2-1B",
prompt=prompt,
max_tokens=100,
)

print(completion.choices[0].text.strip())

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
h14turbo added the bug label Jan 8, 2025
Member

mgoin commented Jan 8, 2025

Hi @h14turbo thanks for reporting! I think this specific issue has just been fixed with #11836. Could you please try pulling main and trying again?

Author

h14turbo commented Jan 8, 2025

It is now working! Thank you very much for helping with this. Lifesaver!

h14turbo closed this as completed Jan 8, 2025