Implement support for ONNX model inference on GPU #386
Comments
ermolenkodev added a commit to ermolenkodev/KotlinDL that referenced this issue on Jul 27, 2022 (…otlin#386):
* Add a method for model initialization with non-standard execution providers
* Add convenience methods for inference using specific execution providers
* Add tests for inference with non-standard execution providers
* Add samples for inference with the CUDA execution provider
ermolenkodev added a commit to ermolenkodev/KotlinDL that referenced this issue on Jul 27, 2022.
ermolenkodev added a commit to ermolenkodev/KotlinDL that referenced this issue on Jul 27, 2022.
ermolenkodev added a commit to ermolenkodev/KotlinDL that referenced this issue on Jul 29, 2022:
* Add documentation for the ExecutionProvider classes
* Remove the ExecutionProviders object
* Remove unnecessary equals() and hashCode() methods from the ExecutionProvider classes
* Add an option to provide a list of execution providers directly during model initialization
* Fix the inferUsing() convenience function to call use() instead of run()
ermolenkodev added a commit to ermolenkodev/KotlinDL that referenced this issue on Aug 2, 2022.
ermolenkodev added a commit that referenced this issue on Aug 2, 2022 (#409):
* Add support for ONNX model inference using the CUDA execution provider (#386)
* Add a method for model initialization with non-standard execution providers
* Add convenience methods for inference using specific execution providers
* Add tests for inference with non-standard execution providers
* Add samples for inference with the CUDA execution provider
* Add documentation for ONNX model inference using the CUDA execution provider (#386)
* Refactoring and documentation improvements (#386)
* Add documentation for the ExecutionProvider classes
* Remove the ExecutionProviders object
* Remove unnecessary equals() and hashCode() methods from the ExecutionProvider classes
* Add an option to provide a list of execution providers directly during model initialization
* Fix the inferUsing() convenience function to call use() instead of run()
* Refactor the multiPoseCudaInference example (#386)
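The merged commits above describe an execution-provider API: models can be initialized with a list of providers directly, and a convenience function runs inference with a specific provider. A minimal Kotlin sketch of how that API might be used follows; the class names, package paths, parameter names (such as `deviceId`), and method signatures here are assumptions inferred from the commit messages, not verified against a released KotlinDL version.

```kotlin
// Hypothetical usage sketch based on the commit messages above.
// All imports and signatures are assumptions, not a confirmed KotlinDL API.
import org.jetbrains.kotlinx.dl.onnx.inference.OnnxInferenceModel
import org.jetbrains.kotlinx.dl.onnx.inference.executionproviders.ExecutionProvider.CPU
import org.jetbrains.kotlinx.dl.onnx.inference.executionproviders.ExecutionProvider.CUDA

fun main() {
    val model = OnnxInferenceModel("path/to/model.onnx")

    // Provide a list of execution providers directly during initialization,
    // listing CPU after CUDA as a fallback when no GPU is available.
    model.initializeWith(CUDA(deviceId = 0), CPU())

    // The inferUsing() convenience function scopes one prediction to a
    // specific execution provider (per the commits, it wraps use()).
    val input = FloatArray(224 * 224 * 3)
    val scores = model.inferUsing(CUDA(deviceId = 0)) { it.predictSoftly(input) }
    println("Predicted ${scores.size} class scores")
}
```

Running on the CUDA provider requires the `onnxruntime_gpu` dependency and a compatible CUDA/cuDNN installation; otherwise session creation fails and only the CPU provider is usable.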
Feature request:
Add the ability to execute ONNX models on the GPU.