Environment:
paddle_inference.tgz prebuilt package (GPU / MKL, gcc 8.2, CUDA 11.7 / cuDNN 8.4 / TensorRT 8.4)

compile.sh (build succeeded):
mkdir -p build
cd build
rm -rf *

DEMO_NAME=main
WITH_MKL=ON
WITH_GPU=ON
USE_TENSORRT=OFF

LIB_DIR=/home/yi/Libs/paddle_inference/
CUDNN_LIB=/usr/lib/x86_64-linux-gnu/
CUDA_LIB=/usr/local/cuda-11.7/lib64/
TENSORRT_ROOT=/home/yi/Libs/TensorRT-8.5.3.1
CUSTOM_OPERATOR_FILES="custom_ops/voxelize_op.cu;custom_ops/voxelize_op.cc;custom_ops/iou3d_nms_kernel.cu;custom_ops/postprocess.cc;custom_ops/postprocess.cu"

cmake .. -DPADDLE_LIB=${LIB_DIR} \
  -DWITH_MKL=${WITH_MKL} \
  -DDEMO_NAME=${DEMO_NAME} \
  -DWITH_GPU=${WITH_GPU} \
  -DWITH_STATIC_LIB=OFF \
  -DUSE_TENSORRT=${USE_TENSORRT} \
  -DCUDNN_LIB=${CUDNN_LIB} \
  -DCUDA_LIB=${CUDA_LIB} \
  -DTENSORRT_ROOT=${TENSORRT_ROOT} \
  -DCUSTOM_OPERATOR_FILES=${CUSTOM_OPERATOR_FILES}

make -j
Error Message:
yi@ubuntu20:~/apollo-model-centerpoint/deploy/centerpoint/cpp$ ./build/main \
    --model_file /home/yi/Documents/paddle/center_point_paddle/center_point_paddle/centerpoint.pdmodel \
    --params_file /home/yi/Documents/paddle/center_point_paddle/center_point_paddle/centerpoint.pdiparams \
    --lidar_file /media/yi/SHARESSD/ParisLuco/ParisLuco/00/frames/frame_09343.ply \
    --num_point_dim 4
./build/main: error while loading shared libraries: libpaddle2onnx.so.1.0.0rc2: cannot open shared object file: No such file or directory
Sorry to bother you. I ran into the above problem while testing the C++ deployment, and I'm not sure at which step this shared library failed to get installed.
Thank you very much for this open-source project and for all the time and effort you have put into it.
Update: it looks like the paddle_lib libraries were not linked correctly during the build; the resulting main binary is very small and the build finishes suspiciously fast.
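For reference, one way to narrow this down is to check whether the missing library is actually shipped inside the prebuilt package and whether the dynamic loader can see it. This is only a sketch assuming the usual paddle/lib and third_party/install layout of the paddle_inference tarball; the exact directory names may differ in your package:

# Is libpaddle2onnx shipped with the prebuilt package?
find /home/yi/Libs/paddle_inference/ -name "libpaddle2onnx.so*"

# If it is, add the Paddle and third_party library directories to the loader path before running the demo
export LD_LIBRARY_PATH=/home/yi/Libs/paddle_inference/paddle/lib:/home/yi/Libs/paddle_inference/third_party/install/paddle2onnx/lib:$LD_LIBRARY_PATH
./build/main --model_file ... --params_file ... --lidar_file ... --num_point_dim 4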
./build/main: error while loading shared libraries: libpaddle2onnx.so.1.0.0rc2: cannot open shared object file: No such file or directory
You can inspect the linkage of the main binary you built with ldd build/main; libpaddle2onnx.so.1.0.0rc2 should show up as "not found". You need to compile Paddle from source in your environment; the build will generate paddle_inference_install_dir under the build directory, and you should use that folder as your link directory (LIB_DIR in compile.sh).
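A minimal version of the check described above (the grep pattern and the LIB_DIR path are illustrative, not the exact commands used here):

# Missing dependencies are reported as "not found"
ldd build/main | grep "not found"

# After building Paddle from source, point the demo build at the generated install dir
LIB_DIR=/path/to/Paddle/build/paddle_inference_install_dir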
@gaoyi-ai Hi, after compiling from source, did you also run into the problem that the hard_voxelize operator could not be registered?
I figured it out. The Paddle that the CenterPoint deployment depends on requires gcc 8.2 or 5.4. When I tried to compile in an Ubuntu 20.04 environment, the default gcc version was 8.4.0, and for reasons I don't understand, the resulting main binary could not be linked against the generated libpd_custom_op.so, which is why hard_voxelize was reported as not registered.
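A rough way to confirm this kind of mismatch (the binary and library names are the ones from this thread; the exact symptoms may vary):

# Which compiler does the build pick up?
gcc --version

# Is the custom-operator library linked into the demo binary at all?
ldd build/main | grep libpd_custom_op

# If nothing shows up, the hard_voxelize registration never reaches the process, which matches the "operator not registered" error.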