
Implement support for onnx model inference on gpu #386

Closed
ermolenkodev opened this issue May 26, 2022 · 0 comments

@ermolenkodev
Collaborator

Feature request:
Add the ability to execute an ONNX model on the GPU.
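For context on what this involves at the runtime level: KotlinDL's ONNX support is backed by ONNX Runtime, and running a model on the GPU means registering the CUDA execution provider on the session. A minimal sketch against the ai.onnxruntime Java API (the model path is a placeholder, and the onnxruntime_gpu artifact must be on the classpath in place of the CPU-only one):

```kotlin
import ai.onnxruntime.OrtEnvironment
import ai.onnxruntime.OrtSession

fun main() {
    val env = OrtEnvironment.getEnvironment()
    // Registering the CUDA execution provider routes execution to GPU device 0;
    // operators without a CUDA kernel fall back to the default CPU provider.
    val options = OrtSession.SessionOptions()
    options.addCUDA(0)
    env.createSession("model.onnx", options).use { session -> // placeholder path
        // session.run(inputs) would now execute on the GPU where possible
        println(session.inputNames)
    }
}
```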

ermolenkodev added the enhancement (New feature or request) label on May 26, 2022
ermolenkodev self-assigned this on Jul 13, 2022
ermolenkodev added this to the 0.5 milestone on Jul 15, 2022
ermolenkodev added a commit to ermolenkodev/KotlinDL that referenced this issue on Jul 27, 2022:
Add support for onnx model inference using CUDA execution provider (…otlin#386)

* Add method for model initialization with non-standard execution providers

* Add convenience methods for inference using specific execution providers

* Add tests for inference with non-standard execution providers

* Add samples for inference with CUDA execution provider
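
Read together, the first two items suggest a caller-side shape roughly like the following. This is a sketch, not a quote of the final API: initializeWith(), the provider class names, and the package paths are my reading of these commits and of the 0.5-era KotlinDL layout.

```kotlin
import org.jetbrains.kotlinx.dl.api.inference.onnx.OnnxInferenceModel
import org.jetbrains.kotlinx.dl.api.inference.onnx.executionproviders.ExecutionProvider.CUDA

fun main() {
    // Placeholder model path; API names assumed as described in the lead-in.
    val model = OnnxInferenceModel.load("path/to/model.onnx")
    // initializeWith(...) re-creates the underlying ONNX Runtime session with
    // the requested (non-standard) execution provider.
    model.initializeWith(CUDA(0))
    model.use {
        // it.predict(...) now executes via the CUDA execution provider
    }
}
```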
ermolenkodev added a commit to ermolenkodev/KotlinDL that referenced this issue on Jul 27, 2022
ermolenkodev added a commit to ermolenkodev/KotlinDL that referenced this issue on Jul 27, 2022
ermolenkodev added a commit to ermolenkodev/KotlinDL that referenced this issue on Jul 29, 2022:
Refactoring and documentation improvement (#386)
* Add documentation for ExecutionProvider classes.

* Remove ExecutionProviders object

* Remove unnecessary equals() and hashCode() methods for ExecutionProvider classes

* Add option to provide a list of execution providers during model initialization directly.

* Fix inferUsing() convenience function to call use() instead of run()
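
The last two items are about ergonomics: inferUsing() runs a block of inference code against a specific execution provider, and the fix swaps Kotlin's run() scope function for use(), so the session is released even when the block throws. A hedged sketch of caller code, under the same naming assumptions as above:

```kotlin
import org.jetbrains.kotlinx.dl.api.inference.onnx.OnnxInferenceModel
import org.jetbrains.kotlinx.dl.api.inference.onnx.executionproviders.ExecutionProvider.CUDA

fun classify(model: OnnxInferenceModel, input: FloatArray): Int =
    // The block runs with the CUDA provider; because inferUsing() now wraps the
    // call in use() rather than run(), provider resources are released even if
    // the block throws. (Exact cleanup semantics per the commit message.)
    model.inferUsing(CUDA(0)) { it.predict(input) }
```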
ermolenkodev added a commit to ermolenkodev/KotlinDL that referenced this issue on Aug 2, 2022
ermolenkodev added a commit that referenced this issue on Aug 2, 2022:
Add support for onnx model inference using CUDA execution provider (#386) (#409)

* Add support for onnx model inference using CUDA execution provider (#386)

* Add method for model initialization with non-standard execution providers

* Add convenience methods for inference using specific execution providers

* Add tests for inference with non-standard execution providers

* Add samples for inference with CUDA execution provider

* Add documentation for onnx model inference using CUDA execution provider (#386)

* Refactoring and documentation improvement (#386)

* Add documentation for ExecutionProvider classes.

* Remove ExecutionProviders object

* Remove unnecessary equals() and hashCode() methods for ExecutionProvider classes

* Add option to provide a list of execution providers during model initialization directly.

* Fix inferUsing() convenience function to call use() instead of run()

* Refactor multiPoseCudaInference example (#386)
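
Of the items above, providing a list of execution providers at initialization is the one with non-obvious semantics: ONNX Runtime consults providers in the order they are registered and falls back for operators a provider cannot execute. A sketch of a CUDA-with-CPU-fallback configuration, again with initializeWith() and the provider constructors as assumptions based on the commit wording:

```kotlin
import org.jetbrains.kotlinx.dl.api.inference.onnx.OnnxInferenceModel
import org.jetbrains.kotlinx.dl.api.inference.onnx.executionproviders.ExecutionProvider.CPU
import org.jetbrains.kotlinx.dl.api.inference.onnx.executionproviders.ExecutionProvider.CUDA

fun main() {
    val model = OnnxInferenceModel.load("path/to/model.onnx") // placeholder path
    // Providers are consulted in declaration order: CUDA first, then CPU for any
    // operator without a CUDA kernel (ONNX Runtime keeps CPU as the final fallback).
    model.initializeWith(CUDA(0), CPU())
    model.use {
        // it.predict(...) runs with the provider list configured above
    }
}
```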