[Bug]: #7072
Labels: bug

Comments
Your GPU is too old.

I am looking to purchase a Tesla K80 24GB graphics card. It is also quite old, and I don't know whether it will be supported.

Sorry, I have only run the quantization model on Ampere GPUs; I am not familiar with …

OK, thank you.
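For reference, the cutoff here is CUDA compute capability: the GTX 1050 in this report is Pascal (sm_61) and the Tesla K80 is Kepler (sm_37), both below the Volta (sm_70) minimum that vLLM's prebuilt kernels target; Ampere is sm_80. A quick pre-flight check along these lines (a minimal sketch using PyTorch's public API; the 7.0 threshold is an assumption based on vLLM's documented requirement at the time) shows whether a card can run them at all:

```python
# Minimal sketch: check the local GPU's compute capability before trying
# to load a quantized model. The (7, 0) minimum is an assumption based on
# vLLM's documented Volta-and-newer requirement; Ampere would be (8, 0).
import torch

MIN_CAPABILITY = (7, 0)

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible")

major, minor = torch.cuda.get_device_capability(0)
print(f"{torch.cuda.get_device_name(0)}: compute capability {major}.{minor}")

if (major, minor) < MIN_CAPABILITY:
    print("Below the minimum; expect 'no kernel image is available' errors.")
```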
Your current environment
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.35
Python version: 3.8.0 | packaged by conda-forge | (default, Nov 22 2019, 19:11:38) [GCC 7.3.0] (64-bit runtime)
Python platform: Linux-6.5.0-45-generic-x86_64-with-glibc2.10
Is CUDA available: True
CUDA runtime version: 12.1.66
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1050
Nvidia driver version: 535.183.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Vendor ID: GenuineIntel
Model name: Intel(R) Pentium(R) Gold G5400 CPU @ 3.70GHz
CPU family: 6
Model: 158
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
Stepping: 10
CPU max MHz: 3700.0000
CPU min MHz: 800.0000
BogoMIPS: 7399.70
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust smep erms invpcid mpx rdseed smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 64 KiB (2 instances)
L1i cache: 64 KiB (2 instances)
L2 cache: 512 KiB (2 instances)
L3 cache: 4 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-3
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS; IBPB conditional; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Mitigation; Microcode
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] nvidia-nccl-cu12==2.19.3
[pip3] torch==2.2.1
[pip3] torchaudio==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.42.4
[pip3] transformers-stream-generator==0.0.4
[pip3] triton==2.2.0
[pip3] vllm_nccl_cu12==2.18.1.0.4.0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.19.3 pypi_0 pypi
[conda] torch 2.2.1 pypi_0 pypi
[conda] torchaudio 2.4.0 pypi_0 pypi
[conda] torchvision 0.19.0 pypi_0 pypi
[conda] transformers 4.43.3 pypi_0 pypi
[conda] transformers-stream-generator 0.0.4 pypi_0 pypi
[conda] triton 2.2.0 pypi_0 pypi
[conda] vllm-nccl-cu12 2.18.1.0.4.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: N/A
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X 0-3 0 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
🐛 Describe the bug
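The crash reproduces with nothing more than constructing an LLM for a GPTQ checkpoint on this card. Here is a minimal repro sketch, assuming vLLM 0.4.1's Python API as reported in the log (the original script goes through Qwen's vllm_wrapper.py, which ends up making the same LLM(...) call):

```python
# Minimal repro sketch (assumes vLLM 0.4.1, per the log below).
from vllm import LLM

llm = LLM(
    model="Qwen-1_8B-Chat-Int4",   # local GPTQ-quantized checkpoint
    quantization="gptq",
    dtype="float16",
    tensor_parallel_size=1,
    trust_remote_code=True,
)
# On a pre-Volta GPU such as the GTX 1050 (sm_61), initialization fails
# during KV-cache profiling with "no kernel image is available".
```

Full console output: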
WARNING 08-02 16:15:40 config.py:169] gptq quantization is not fully optimized yet. The speed can be slower than non-quantized models.
INFO 08-02 16:15:40 llm_engine.py:98] Initializing an LLM engine (v0.4.1) with config: model='Qwen-1_8B-Chat-Int4', speculative_config=None, tokenizer='Qwen-1_8B-Chat-Int4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.float16, max_seq_len=8192, download_dir=None, load_format=auto, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=gptq, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0)
WARNING 08-02 16:15:40 tokenizer.py:123] Using a slow tokenizer. This might cause a significant slowdown. Consider using a fast tokenizer instead.
INFO 08-02 16:15:40 utils.py:608] Found nccl from library /home/lics/.config/vllm/nccl/cu12/libnccl.so.2.18.1
INFO 08-02 16:15:40 selector.py:65] Cannot use FlashAttention backend for Volta and Turing GPUs.
INFO 08-02 16:15:40 selector.py:33] Using XFormers backend.
INFO 08-02 16:15:43 model_runner.py:173] Loading model weights took 1.7517 GB
Traceback (most recent call last):
File "/home/code/cccc2.py", line 3, in
model = vLLMWrapper('Qwen-1_8B-Chat-Int4', tensor_parallel_size=1 , dtype='float16')
File "/home/code/qwen/examples/vllm_wrapper.py", line 135, in init
self.model = LLM(model=model_dir,
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/entrypoints/llm.py", line 118, in init
self.llm_engine = LLMEngine.from_engine_args(
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 277, in from_engine_args
engine = cls(
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 160, in init
self._initialize_kv_caches()
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/engine/llm_engine.py", line 236, in _initialize_kv_caches
self.model_executor.determine_num_available_blocks())
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/executor/gpu_executor.py", line 111, in determine_num_available_blocks
return self.driver_worker.determine_num_available_blocks()
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/worker/worker.py", line 138, in determine_num_available_blocks
self.model_runner.profile_run()
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/worker/model_runner.py", line 927, in profile_run
self.execute_model(seqs, kv_caches)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/worker/model_runner.py", line 848, in execute_model
hidden_states = model_executable(**execute_model_kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/model_executor/models/qwen.py", line 237, in forward
hidden_states = self.transformer(input_ids, positions, kv_caches,
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/model_executor/models/qwen.py", line 204, in forward
hidden_states, residual = layer(
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/model_executor/models/qwen.py", line 159, in forward
hidden_states = self.attn(
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/model_executor/models/qwen.py", line 112, in forward
qkv, _ = self.c_attn(hidden_states)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1520, in call_impl
return forward_call(*args, **kwargs)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/model_executor/layers/linear.py", line 242, in forward
output_parallel = self.linear_method.apply_weights(self, input, bias)
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/model_executor/layers/quantization/gptq.py", line 215, in apply_weights
output = ops.gptq_gemm(reshaped_x, layer.qweight, layer.qzeros,
File "/home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/_custom_ops.py", line 133, in gptq_gemm
return vllm_ops.gptq_gemm(a, b_q_weight, b_gptq_qzeros, b_gptq_scales,
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x57 (0x73b29fd81d87 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::string const&) + 0x64 (0x73b29fd3275f in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #2: c10::cuda::c10_cuda_check_implementation(int, char const*, char const*, int, bool) + 0x118 (0x73b2a05b28a8 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: + 0x2dfb6 (0x73b2a058dfb6 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #4: + 0x337c0 (0x73b2a05937c0 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #5: + 0x35124 (0x73b2a0595124 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #6: + 0x354d6 (0x73b2a05954d6 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #7: + 0x17a3619 (0x73b2887a3619 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #8: at::detail::empty_generic(c10::ArrayRef, c10::Allocator*, c10::DispatchKeySet, c10::ScalarType, std::optional<c10::MemoryFormat>) + 0x14 (0x73b28879d1d4 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #9: at::detail::empty_cuda(c10::ArrayRef, c10::ScalarType, std::optional<c10::Device>, std::optional<c10::MemoryFormat>) + 0x111 (0x73b2557afc01 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #10: at::detail::empty_cuda(c10::ArrayRef, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional, std::optional<c10::MemoryFormat>) + 0x36 (0x73b2557afed6 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #11: at::native::empty_cuda(c10::ArrayRef, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional, std::optional<c10::MemoryFormat>) + 0x20 (0x73b2558f9b80 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #12: + 0x30467c9 (0x73b2578467c9 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #13: + 0x30468ab (0x73b2578468ab in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libtorch_cuda.so)
frame #14: at::_ops::empty_memory_format::redispatch(c10::DispatchKeySet, c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional, std::optional<c10::MemoryFormat>) + 0xe7 (0x73b289768de7 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #15: + 0x2b04faf (0x73b289b04faf in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #16: at::_ops::empty_memory_format::call(c10::ArrayRef<c10::SymInt>, std::optional<c10::ScalarType>, std::optional<c10::Layout>, std::optional<c10::Device>, std::optional, std::optional<c10::MemoryFormat>) + 0x1a0 (0x73b2897aece0 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #17: torch::empty(c10::ArrayRef, c10::TensorOptions, std::optional<c10::MemoryFormat>) + 0x20d (0x73b1e16857fd in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/_C.cpython-38-x86_64-linux-gnu.so)
frame #18: gptq_gemm(at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor, bool, int) + 0x1f7 (0x73b1e1681907 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/_C.cpython-38-x86_64-linux-gnu.so)
frame #19: + 0x9c4a2 (0x73b1e169c4a2 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/_C.cpython-38-x86_64-linux-gnu.so)
frame #20: + 0x981b3 (0x73b1e16981b3 in /home/lics/anaconda3/envs/qwen/lib/python3.8/site-packages/vllm/_C.cpython-38-x86_64-linux-gnu.so)
frame #21: PyCFunction_Call + 0x56 (0x581f48990636 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #22: _PyObject_MakeTpCall + 0x22f (0x581f4894d04f in /home/lics/anaconda3/envs/qwen/bin/python)
frame #23: _PyEval_EvalFrameDefault + 0x4679 (0x581f489d8589 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #24: _PyFunction_Vectorcall + 0x10b (0x581f4899c6eb in /home/lics/anaconda3/envs/qwen/bin/python)
frame #25: + 0xfbd8e (0x581f48915d8e in /home/lics/anaconda3/envs/qwen/bin/python)
frame #26: _PyEval_EvalCodeWithName + 0x2d2 (0x581f4899b7c2 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #27: _PyFunction_Vectorcall + 0x1e3 (0x581f4899c7c3 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #28: + 0xfaf50 (0x581f48914f50 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #29: _PyFunction_Vectorcall + 0x10b (0x581f4899c6eb in /home/lics/anaconda3/envs/qwen/bin/python)
frame #30: + 0x182ce9 (0x581f4899cce9 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #31: PyVectorcall_Call + 0x71 (0x581f4894c831 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #32: _PyEval_EvalFrameDefault + 0x2154 (0x581f489d6064 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #33: _PyEval_EvalCodeWithName + 0x2d2 (0x581f4899b7c2 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #34: _PyFunction_Vectorcall + 0x1e3 (0x581f4899c7c3 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #35: + 0x182ce9 (0x581f4899cce9 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #36: PyVectorcall_Call + 0x71 (0x581f4894c831 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #37: _PyEval_EvalFrameDefault + 0x2154 (0x581f489d6064 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #38: _PyEval_EvalCodeWithName + 0x2d2 (0x581f4899b7c2 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #39: _PyObject_FastCallDict + 0x20c (0x581f4899d30c in /home/lics/anaconda3/envs/qwen/bin/python)
frame #40: _PyObject_Call_Prepend + 0x63 (0x581f4899d5b3 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #41: + 0x1836ba (0x581f4899d6ba in /home/lics/anaconda3/envs/qwen/bin/python)
frame #42: _PyObject_MakeTpCall + 0x22f (0x581f4894d04f in /home/lics/anaconda3/envs/qwen/bin/python)
frame #43: _PyEval_EvalFrameDefault + 0x4679 (0x581f489d8589 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #44: _PyEval_EvalCodeWithName + 0x2d2 (0x581f4899b7c2 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #45: _PyFunction_Vectorcall + 0x1e3 (0x581f4899c7c3 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #46: + 0x182ce9 (0x581f4899cce9 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #47: PyVectorcall_Call + 0x71 (0x581f4894c831 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #48: _PyEval_EvalFrameDefault + 0x2154 (0x581f489d6064 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #49: _PyEval_EvalCodeWithName + 0x2d2 (0x581f4899b7c2 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #50: _PyFunction_Vectorcall + 0x1e3 (0x581f4899c7c3 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #51: + 0x182ce9 (0x581f4899cce9 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #52: PyVectorcall_Call + 0x71 (0x581f4894c831 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #53: _PyEval_EvalFrameDefault + 0x2154 (0x581f489d6064 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #54: _PyEval_EvalCodeWithName + 0x2d2 (0x581f4899b7c2 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #55: _PyFunction_Vectorcall + 0x1e3 (0x581f4899c7c3 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #56: _PyObject_FastCallDict + 0x24b (0x581f4899d34b in /home/lics/anaconda3/envs/qwen/bin/python)
frame #57: _PyObject_Call_Prepend + 0x63 (0x581f4899d5b3 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #58: + 0x1836ba (0x581f4899d6ba in /home/lics/anaconda3/envs/qwen/bin/python)
frame #59: _PyObject_MakeTpCall + 0x22f (0x581f4894d04f in /home/lics/anaconda3/envs/qwen/bin/python)
frame #60: _PyEval_EvalFrameDefault + 0x11e5 (0x581f489d50f5 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #61: _PyFunction_Vectorcall + 0x10b (0x581f4899c6eb in /home/lics/anaconda3/envs/qwen/bin/python)
frame #62: + 0x182ce9 (0x581f4899cce9 in /home/lics/anaconda3/envs/qwen/bin/python)
frame #63: PyVectorcall_Call + 0x71 (0x581f4894c831 in /home/lics/anaconda3/envs/qwen/bin/python)
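The "no kernel image is available for execution on the device" error means the binary has no kernels compiled for this GPU's architecture. A quick way to see the mismatch from Python is sketched below; note that torch.cuda.get_arch_list() reports only the architectures baked into the installed PyTorch build, while vLLM's _C extension is compiled separately, so a missing sm_61 here is indicative rather than conclusive:

```python
# Diagnostic sketch: compare the device architecture with the kernel
# architectures the installed PyTorch build was compiled for.
import torch

major, minor = torch.cuda.get_device_capability(0)
device_arch = f"sm_{major}{minor}"        # e.g. "sm_61" for a GTX 1050
built_archs = torch.cuda.get_arch_list()  # e.g. ["sm_70", "sm_75", "sm_80", ...]

print(f"device arch: {device_arch}")
print(f"built archs: {built_archs}")
if device_arch not in built_archs:
    print("No matching kernel image; consistent with the CUDA error above.")
```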