[WIP] Deepseek V2 MLA #28690
Annotations
10 errors and 1 warning
Analysing the code with ruff:

vllm/attention/backends/flashinfer.py:9:81: E501 Line too long (100 > 80)
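E501 recurs at lines 71, 380, and 408 of the same file. The usual fix is to wrap the offending line inside parentheses. A minimal sketch of the before/after shape, using made-up content rather than the actual flashinfer.py lines:

```python
# Hypothetical over-long line (would trip E501 past column 80):
# message = "FlashInfer decode path selected; tensor cores are used when the query/KV head ratio is large"

# Wrapped under 80 columns with implicit string concatenation:
message = (
    "FlashInfer decode path selected; tensor cores are used "
    "when the query/KV head ratio is large"
)
print(message)
```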
vllm/attention/backends/flashinfer.py:13:38: F401 `vllm.vllm_flash_attn.flash_attn_varlen_func` imported but unused; consider using `importlib.util.find_spec` to test for availability
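Ruff's own suggestion here is to probe for the optional module instead of importing a symbol that is never called. A sketch of that pattern, assuming the import exists only as an availability check; the flag name is hypothetical, not one used in the PR:

```python
import importlib.util

# Test whether vllm.vllm_flash_attn is importable without importing the
# unused symbol itself (which is what trips F401).
try:
    HAS_VLLM_FLASH_ATTN = (
        importlib.util.find_spec("vllm.vllm_flash_attn") is not None
    )
except ModuleNotFoundError:
    # The parent package `vllm` itself is not installed.
    HAS_VLLM_FLASH_ATTN = False

print(HAS_VLLM_FLASH_ATTN)
```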
vllm/attention/backends/flashinfer.py:71:81: E501 Line too long (101 > 80)
vllm/attention/backends/flashinfer.py:134:13: F841 Local variable `use_tensor_cores` is assigned to but never used
vllm/attention/backends/flashinfer.py:195:9: F841 Local variable `use_tensor_cores` is assigned to but never used
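Both F841 hits have the same shape: `use_tensor_cores` is computed and then never read, which usually means the value was meant to be passed along. A minimal sketch of the pattern and its fix; the function and the head-ratio heuristic are hypothetical, not the PR's actual code:

```python
# F841 pattern: a local is assigned but never used afterwards.
# The ratio heuristic below is an invented stand-in for whatever
# flashinfer.py computes at lines 134 and 195.
def decode_wrapper_kwargs(num_qo_heads: int, num_kv_heads: int) -> dict:
    use_tensor_cores = (num_qo_heads // num_kv_heads) > 4
    # Left unused, `use_tensor_cores` would trip F841. The fix is to
    # consume the value (as here) or delete the assignment entirely.
    return {"use_tensor_cores": use_tensor_cores}

print(decode_wrapper_kwargs(32, 4))  # {'use_tensor_cores': True}
```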
vllm/attention/backends/flashinfer.py:280:1: E402 Module level import not at top of file
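E402 fires when an import appears after executable module-level code. A self-contained sketch of the shape, assuming the late import is deliberate (for instance, to break an import cycle); none of this is the actual file layout:

```python
import os  # imports belong above any module-level statements

BACKEND = os.environ.get("VLLM_ATTENTION_BACKEND", "FLASHINFER")

# An import down here, after the assignment above, trips E402. If the
# placement is intentional, suppress the rule explicitly rather than
# leaving the warning in CI:
import math  # noqa: E402

print(BACKEND, math.pi)
```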
vllm/attention/backends/flashinfer.py:380:81: E501 Line too long (130 > 80)
vllm/attention/backends/flashinfer.py:408:81: E501 Line too long (126 > 80)
vllm/attention/backends/flashinfer.py:812:16: SIM300 Yoda condition detected
vllm/attention/backends/flashinfer.py:813:16: SIM300 Yoda condition detected
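SIM300 flags comparisons written constant-first ("Yoda conditions"); flake8-simplify wants the variable on the left. A before/after sketch with invented values, since the actual comparisons at lines 812 and 813 are not shown in the log:

```python
num_heads, head_size = 32, 128

# Yoda style, flagged by SIM300:
if 32 == num_heads and 128 == head_size:
    pass

# Conventional order, which reads left to right:
if num_heads == 32 and head_size == 128:
    print("supported head configuration")
```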
Warning: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636