How do you calculate the inference time? #5
Comments
As we mentioned in the paper, this time is the forward inference time. My device is a 1080 Ti, and the input is a 320×320 RGB image.
@sjf18 Thank you for your attention. Regarding your point that MINet_Res50 has a large number of parameters, I suggest you try the channel-compressed version (https://github.com/lartpang/MINet/blob/master/code/module/MyLightModule.py), which seems to have little impact on performance. You can also use a larger batch size to improve performance.
@lartpang
I have read your code. When you are using PyTorch and want to measure time on CUDA, you need to call `torch.cuda.synchronize()` before reading the clock, because CUDA kernels are launched asynchronously.
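The synchronization issue discussed above can be sketched as follows. This is a hypothetical timing harness (the `measure_fps` name and its parameters are mine, not from the MINet repository) showing where `torch.cuda.synchronize()` is needed when timing GPU inference:

```python
import time
import torch

@torch.no_grad()
def measure_fps(model, input_tensor, warmup=10, iters=100):
    """Measure forward-pass FPS.

    CUDA kernel launches are asynchronous, so without synchronizing,
    time.perf_counter() would measure only launch overhead, not the
    actual GPU compute time.
    """
    model.eval()
    # Warm-up runs so lazy initialization / cuDNN autotuning is excluded.
    for _ in range(warmup):
        model(input_tensor)
    if input_tensor.is_cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        model(input_tensor)
    if input_tensor.is_cuda:
        torch.cuda.synchronize()  # wait for all queued kernels to finish
    elapsed = time.perf_counter() - start
    return iters / elapsed
```

Run on a CUDA tensor and a CUDA model, the two `synchronize()` calls make the measured interval cover the full GPU work rather than just the Python-side launch loop.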
@sjf18 I have fixed the FPS-testing code in the current commit. Thank you for pointing out the mistake; it will be corrected in a later version of the paper.
Hi, thanks for your great work. I'm curious how you calculate your inference time: when I run a single image on a Tesla V100 using your MINet demo, it is far from reaching 86 FPS.
I also computed MINet-Res50's FLOPs and parameter count, 162.38 GFLOPs and 87.06 M parameters. These are quite large, so how can it run as fast as reported in your paper?
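A parameter count like the 87.06 M quoted above can be verified by summing tensor sizes directly in PyTorch. This is a generic sketch (the `count_params` helper is mine, not code from the MINet repository), shown on a toy layer:

```python
import torch

def count_params(model: torch.nn.Module) -> int:
    """Count trainable parameters of an nn.Module."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Toy example: a 3x3 conv mapping RGB input to 64 channels.
conv = torch.nn.Conv2d(3, 64, kernel_size=3)
print(count_params(conv))  # 3*3*3*64 weights + 64 biases = 1792
```

Calling the same helper on the full MINet-Res50 model object would reproduce (or refute) the 87.06 M figure.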