
Why is the quantized mobilenet-ssd model slower than the float model? #552

Closed
renfeier opened this issue Aug 30, 2018 · 7 comments

Comments

@renfeier

Testing on RK3399, the quantized mobilenet-ssd model is actually slower than the float model, and by a large margin. I'm curious what causes this.
I produced the quantized mobilenet-ssd param by appending 8=1 after the parameters of each Conv layer in mobilenet-ssd.param.
[screenshot of timing results]
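The edit described above can be sketched as a small script. This is a hypothetical helper, not an official ncnn tool: it assumes the ncnn .param convention that each layer line starts with its layer type, and appends `8=1` (the int8 flag mentioned above) to every `Convolution` / `ConvolutionDepthWise` line.

```python
# Sketch: append "8=1" to convolution layers in an ncnn .param file,
# mirroring the manual edit described in this issue. The layer-type
# names and the "8=1" flag follow ncnn's param format; verify against
# your ncnn version before relying on this.

QUANT_LAYER_TYPES = {"Convolution", "ConvolutionDepthWise"}

def append_int8_flag(param_text: str) -> str:
    """Return param_text with ' 8=1' appended to quantizable conv layers."""
    out_lines = []
    for line in param_text.splitlines():
        fields = line.split()
        # Skip non-conv layers and lines that already carry the flag.
        if fields and fields[0] in QUANT_LAYER_TYPES and "8=1" not in fields:
            line = line.rstrip() + " 8=1"
        out_lines.append(line)
    return "\n".join(out_lines)
```

Run as e.g. `open("mobilenet-ssd-int8.param", "w").write(append_int8_flag(open("mobilenet-ssd.param").read()))`. Note the script is idempotent, so re-running it does not duplicate the flag.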

@renfeier
Author

@nihui

@nihui
Member

nihui commented Aug 30, 2018

int8 computation is not yet optimized for aarch64, so it is very slow qaq

@leeburt

leeburt commented Sep 17, 2018

@renfeier Hi, I couldn't find any material on quantizing the mobilenet-ssd caffe model. Where can I find relevant documentation, and how do I enable int8 inference in ncnn?

@fangbaolei

@nihui When loading an FP16-quantized model, speed on PC nearly doubles compared with the original model, but there is no change on RK3399. Is that also because the corresponding optimization hasn't been done?

@BUG1989
Contributor

BUG1989 commented Mar 15, 2019

@renfeier @leeburt @fangbaolei
int8 quantization is now supported on arm64-v8a; please update your code : )

@qiudi0127

@nihui Hello, after updating to the latest version of ncnn, why does inference for the same model take several times longer? Thanks.

@nihui
Member

nihui commented May 8, 2021

https://zhuanlan.zhihu.com/p/370689914
The 20210507 release greatly optimizes int8 inference speed; updating is recommended.
