Why does the quantized mobilenet-ssd model take longer than the float model? #552
int8 compute has not been optimized for aarch64 yet, so it is slow.
@renfeier Hello, I couldn't find a method for quantizing the mobilenet-ssd caffe model. Where can I find related documentation, and how do I enable it in ncnn?
@nihui Loading the FP16-quantized model nearly doubles the speed on PC compared with the original model, but there is no change on RK3399. Is that also because the corresponding optimization has not been done?
@renfeier @leeburt @fangbaolei
@nihui Hello, after updating to the latest version of ncnn, why does the same model take several times longer? Thanks.
https://zhuanlan.zhihu.com/p/370689914
Testing on RK3399, the quantized mobilenet-ssd model is actually slower than the float model, and by a large margin. I'm curious what causes this.
![image](https://user-images.githubusercontent.com/13659613/44828122-f9995b00-ac48-11e8-9a8e-c8479d77d5dd.png)
I created the quantized mobilenet-ssd param by appending 8=1 to the Convolution layer parameters in mobilenet-ssd.param.
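The edit described above can be scripted rather than done by hand. Below is a minimal sketch, assuming the ncnn .param text format where each layer line starts with the layer type followed by the layer name, input/output counts, blob names, and `key=value` parameters; the function name `mark_conv_int8` and the choice to also cover `ConvolutionDepthWise` are my own additions, not from the original thread.

```python
def mark_conv_int8(param_text: str) -> str:
    """Append '8=1' to every Convolution-type layer line in an ncnn .param text.

    Assumption: the '8=1' flag on Convolution layers is what enables the int8
    path, as described in the issue above. Lines that already carry '8=1'
    are left untouched.
    """
    out = []
    for line in param_text.splitlines():
        tokens = line.split()
        layer_type = tokens[0] if tokens else ""
        if layer_type in ("Convolution", "ConvolutionDepthWise") and "8=1" not in tokens:
            line = line + " 8=1"
        out.append(line)
    return "\n".join(out)
```

Editing the text this way leaves all non-Convolution layers (Input, Pooling, etc.) unchanged, which matches the manual procedure of only touching Conv layer parameters.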