CUDA Python is the home for accessing NVIDIA’s CUDA platform from Python. It consists of multiple components:
- cuda.core: Pythonic access to CUDA Runtime and other core functionalities
- cuda.bindings: Low-level Python bindings to CUDA C APIs
- cuda.cooperative: Pythonic exposure of CUB cooperative algorithms
- cuda.parallel: Pythonic exposure of Thrust parallel algorithms
For access to NVIDIA CPU & GPU Math Libraries, please refer to nvmath-python.
CUDA Python is currently undergoing an overhaul to improve existing components and bring up new ones. All of the functionality previously available from the cuda-python package remains available; please refer to the cuda.bindings documentation for the installation guide and further details.
cuda-python is being restructured into a metapackage that contains a collection of subpackages. Each subpackage is versioned independently, allowing installation of each component as needed.
The cuda.core package offers idiomatic, Pythonic access to the CUDA Runtime and other core functionalities. The goals are to:
- Provide idiomatic ("pythonic") access to CUDA Driver, Runtime, and JIT compiler toolchain
- Focus on developer productivity by ensuring end-to-end CUDA development can be performed quickly and entirely in Python
- Avoid the need for new Python GPU libraries to build homegrown CUDA abstractions from scratch
- Ease the developer burden of maintaining and keeping up with the latest CUDA features
- Flatten the learning curve for current and future generations of CUDA developers
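To make the end-to-end workflow concrete, below is a minimal sketch that compiles and launches a trivial kernel with cuda.core. It assumes the experimental API surface (`cuda.core.experimental` with `Device`, `Program`, `ProgramOptions`, `LaunchConfig`, and `launch`); since the package is still evolving, names and signatures may differ between releases, so treat this as an illustrative sketch rather than a definitive reference.

```python
# Minimal sketch using the experimental cuda.core API; module layout and
# signatures reflect cuda.core.experimental and may change between releases.
from cuda.core.experimental import Device, LaunchConfig, Program, ProgramOptions, launch

# Select and activate a GPU, then create a stream to launch work on.
dev = Device()
dev.set_current()
stream = dev.create_stream()

# An empty kernel is enough to exercise the JIT compile-and-launch path.
code = r"""
extern "C" __global__ void noop() { }
"""

# JIT-compile for the current device's architecture via NVRTC.
arch = "".join(str(i) for i in dev.compute_capability)
prog = Program(code, code_type="c++", options=ProgramOptions(arch=f"sm_{arch}"))
kernel = prog.compile("cubin").get_kernel("noop")

# Describe the launch geometry, launch on the stream, and wait for completion.
config = LaunchConfig(grid=1, block=32)
launch(stream, config, kernel)
stream.sync()
```

Note that the whole round trip, from CUDA C++ source to a launched kernel, happens inside a single Python process with no offline compilation step.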
The cuda.bindings package is a standard set of low-level interfaces, providing full coverage of and access to the CUDA host APIs from Python.
The available interfaces are:
- CUDA Driver
- CUDA Runtime
- NVRTC
- nvJitLink
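As a brief, hedged sketch of the calling convention: each binding returns its status code together with any results as a tuple, mirroring the underlying C signatures. The example below assumes the `cuda.bindings.driver` / `cuda.bindings.runtime` module layout; older releases expose the same functions as `from cuda import cuda, cudart`.

```python
# Sketch of the low-level binding style; assumes the cuda.bindings layout
# (older releases expose the same calls as `from cuda import cuda, cudart`).
from cuda.bindings import driver, runtime


def check(err):
    # Every call returns its status code first; raise on anything but success.
    if isinstance(err, driver.CUresult) and err != driver.CUresult.CUDA_SUCCESS:
        raise RuntimeError(f"CUDA driver error: {err}")
    if isinstance(err, runtime.cudaError_t) and err != runtime.cudaError_t.cudaSuccess:
        raise RuntimeError(f"CUDA runtime error: {err}")


# Initialize the driver API and query basic platform information.
(err,) = driver.cuInit(0)
check(err)

err, version = runtime.cudaRuntimeGetVersion()
check(err)
print(f"CUDA Runtime version: {version}")

err, count = runtime.cudaGetDeviceCount()
check(err)

for ordinal in range(count):
    err, dev = driver.cuDeviceGet(ordinal)
    check(err)
    err, name = driver.cuDeviceGetName(128, dev)  # returned as bytes
    check(err)
    print(f"Device {ordinal}: {name.decode().rstrip(chr(0))}")
```

Keeping the bindings one-to-one with the C headers is what makes them a stable foundation for higher-level layers such as cuda.core.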