Dockerfiles
- CPU Dockerfile, Instructions
- CUDA + CUDNN: Dockerfile, Instructions
- nGraph: Dockerfile, Instructions
- TensorRT: Dockerfile, Instructions
- OpenVINO: Dockerfile, Instructions
- Nuphar: Dockerfile, Instructions
- ARM 32v7: Dockerfile, Instructions
- NVIDIA Jetson TX1/TX2/Nano/Xavier: Dockerfile, Instructions
- ONNX-Ecosystem (CPU + Converters): Dockerfile, Instructions
- ONNX Runtime Server: Dockerfile, Instructions
- MIGraphX: Dockerfile, Instructions
Published Microsoft Container Registry (MCR) Images
Use docker pull with any of the images and tags below to pull an image and try it for yourself. Note that the CPU, CUDA, and TensorRT images include additional dependencies such as Miniconda for compatibility with AzureML image deployment.
Example: Run docker pull mcr.microsoft.com/azureml/onnxruntime:latest-cuda to pull the latest released docker image with ONNX Runtime GPU, CUDA, and CUDNN support.
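For example, to pin a specific release instead of latest, you can pull a versioned tag from the table below (a minimal sketch; the tag is taken from the table, and a bash shell inside the image is an assumption):

```bash
# Pull a pinned ONNX Runtime CPU release rather than :latest
docker pull mcr.microsoft.com/azureml/onnxruntime:v1.4.0

# Start an interactive shell in the image (assumes bash is available inside)
docker run -it mcr.microsoft.com/azureml/onnxruntime:v1.4.0 bash
```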
| Build Flavor | Base Image | ONNX Runtime Docker Image tags | Latest |
| --- | --- | --- | --- |
| Source (CPU) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0, :v0.5.0, :v0.5.1, :v1.0.0, :v1.2.0, :v1.3.0, :v1.4.0 | :latest |
| CUDA (GPU) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0-cuda10.0-cudnn7, :v0.5.0-cuda10.1-cudnn7, :v0.5.1-cuda10.1-cudnn7, :v1.0.0-cuda10.1-cudnn7, :v1.2.0-cuda10.1-cudnn7, :v1.3.0-cuda10.1-cudnn7, :v1.4.0-cuda10.1-cudnn7 | :latest-cuda |
| TensorRT (x86) | mcr.microsoft.com/azureml/onnxruntime | :v0.4.0-tensorrt19.03, :v0.5.0-tensorrt19.06, :v1.0.0-tensorrt19.09, :v1.2.0-tensorrt20.01, :v1.3.0-tensorrt20.01, :v1.4.0-tensorrt20.01 | :latest-tensorrt |
| OpenVINO (VAD-M) | mcr.microsoft.com/azureml/onnxruntime | :v0.5.0-openvino-r1.1-vadm, :v1.0.0-openvino-r1.1-vadm, :v1.4.0-openvino-2020.3.194-vadm | :latest-openvino-vadm |
| OpenVINO (MYRIAD) | mcr.microsoft.com/azureml/onnxruntime | :v0.5.0-openvino-r1.1-myriad, :v1.0.0-openvino-r1.1-myriad, :v1.3.0-openvino-2020.2.120-myriad, :v1.4.0-openvino-2020.3.194-myriad | :latest-openvino-myriad |
| OpenVINO (CPU) | mcr.microsoft.com/azureml/onnxruntime | :v1.0.0-openvino-r1.1-cpu, :v1.3.0-openvino-2020.2.120-cpu, :v1.4.0-openvino-2020.3.194-cpu | :latest-openvino-cpu |
| OpenVINO (GPU) | mcr.microsoft.com/azureml/onnxruntime | :v1.3.0-openvino-2020.2.120-gpu, :v1.4.0-openvino-2020.3.194-gpu | :latest-openvino-gpu |
| nGraph | mcr.microsoft.com/azureml/onnxruntime | :v1.0.0-ngraph-v0.26.0 | :latest-ngraph |
| Nuphar | mcr.microsoft.com/azureml/onnxruntime | :latest-nuphar | |
| Server | mcr.microsoft.com/onnxruntime/server | :v0.4.0, :v0.5.0, :v0.5.1, :v1.0.0 | :latest |
| MIGraphX (GPU) | mcr.microsoft.com/azureml/onnxruntime | :v0.6 | :latest |
| Training (usage) | mcr.microsoft.com/azureml/onnxruntime-training | :0.1-rc1-openmpi4.0-cuda10.1-cudnn7.6-nccl2.4.8 | :0.1-rc1-openmpi4.0-cuda10.1-cudnn7.6-nccl2.4.8 |
Ubuntu 16.04, CPU, Python Bindings
- Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-source -f Dockerfile.source .
- Run the Docker image
docker run -it onnxruntime-source
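To sanity-check the image, you can run a quick import test in the container (a minimal sketch; it assumes the image exposes python3 with the onnxruntime package installed, per the Python-bindings build):

```bash
# Print the installed ONNX Runtime version from inside the container
# (assumes python3 and the onnxruntime Python package are present in the image)
docker run -it onnxruntime-source python3 -c "import onnxruntime; print(onnxruntime.__version__)"
```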
Ubuntu 16.04, CUDA 10.0, CuDNN 7
- Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-cuda -f Dockerfile.cuda .
- Run the Docker image
docker run --gpus all -it onnxruntime-cuda
or
nvidia-docker run -it onnxruntime-cuda
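To confirm the GPU build is functional, you can list the available execution providers (a minimal sketch; assumes python3 and the onnxruntime Python bindings inside the image):

```bash
# List execution providers; CUDAExecutionProvider should appear if the GPU build works
docker run --gpus all -it onnxruntime-cuda python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```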
Public Preview
| | |
| --- | --- |
| Deprecation Begins | June 1, 2020 |
| Removal Date | December 1, 2020 |
Starting with the OpenVINO™ toolkit 2020.2 release, all of the features previously available through nGraph have been merged into the OpenVINO™ toolkit. As a result, all the features previously available through the ONNX RT Execution Provider for nGraph have been merged into the ONNX RT Execution Provider for OpenVINO™ toolkit.
Therefore, the ONNX RT Execution Provider for nGraph will be deprecated starting June 1, 2020 and will be completely removed on December 1, 2020. Users are advised to migrate to the ONNX RT Execution Provider for OpenVINO™ toolkit as the unified solution for all AI inferencing on Intel® hardware.
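As a starting point for migration, the published OpenVINO images from the table above can be pulled directly; for example (the tag is taken from the table; choose the variant matching your target hardware):

```bash
# Pull the OpenVINO CPU image as a replacement for the deprecated nGraph image
docker pull mcr.microsoft.com/azureml/onnxruntime:latest-openvino-cpu
```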
Ubuntu 16.04, Python Bindings
- Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-ngraph -f Dockerfile.ngraph .
- Run the Docker image
docker run -it onnxruntime-ngraph
Ubuntu 18.04, CUDA 11.0, TensorRT 7.1.3.4
- Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-trt -f Dockerfile.tensorrt .
- Run the Docker image
docker run --gpus all -it onnxruntime-trt
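To verify that the TensorRT execution provider was built in (a minimal sketch; assumes python3 and the onnxruntime Python bindings inside the image):

```bash
# List execution providers; TensorrtExecutionProvider should appear in the output
docker run --gpus all -it onnxruntime-trt python3 -c "import onnxruntime; print(onnxruntime.get_available_providers())"
```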
Public Preview
Ubuntu 18.04, Python Bindings
- Build the onnxruntime image for one of the accelerators supported below, or retrieve your docker image in one of the following ways.
  - Build the docker image: choose Dockerfile.openvino as the dockerfile for building an OpenVINO 2020.4 based Docker image. Providing the docker build argument DEVICE enables the onnxruntime build for that particular device. You can also provide the arguments ONNXRUNTIME_REPO and ONNXRUNTIME_BRANCH to test a particular repo and branch (see the sketch after this section). The default repository is http://github.com/microsoft/onnxruntime and the default branch is master.
  docker build --rm -t onnxruntime --build-arg DEVICE=$DEVICE -f Dockerfile.openvino .
  - Pull the official image from DockerHub.
- DEVICE: Specifies the hardware target for building the OpenVINO Execution Provider. Below are the options for different Intel target devices.

  | Device Option | Target Device |
  | --- | --- |
  | CPU_FP32 | Intel CPUs |
  | GPU_FP32 | Intel Integrated Graphics |
  | GPU_FP16 | Intel Integrated Graphics |
  | MYRIAD_FP16 | Intel Movidius™ USB sticks |
  | VAD-M_FP16 | Intel Vision Accelerator Design based on Movidius™ MyriadX VPUs |

  This is the hardware accelerator target that is enabled by default in the container image. After building the container image for one default target, the application may explicitly choose a different target at run time with the same container by using the Dynamic device selection API.
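For example, a sketch of a customized build that targets MYRIAD and overrides the source repo and branch (the repo and branch values below are placeholders; substitute your own fork and branch):

```bash
# Build against a specific repo/branch instead of the defaults
# (ONNXRUNTIME_REPO and ONNXRUNTIME_BRANCH values are placeholders)
docker build --rm -t onnxruntime-myriad-custom \
  --build-arg DEVICE=MYRIAD_FP16 \
  --build-arg ONNXRUNTIME_REPO=https://github.com/microsoft/onnxruntime \
  --build-arg ONNXRUNTIME_BRANCH=master \
  -f Dockerfile.openvino .
```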
- Build the docker image from the Dockerfile in this repository.
docker build --rm -t onnxruntime-cpu --build-arg DEVICE=CPU_FP32 --network host -f Dockerfile.openvino .
- Run the docker image
docker run -it onnxruntime-cpu
- Build the docker image from the Dockerfile in this repository.
docker build --rm -t onnxruntime-gpu --build-arg DEVICE=GPU_FP32 --network host -f Dockerfile.openvino .
- Run the docker image
docker run -it --device /dev/dri:/dev/dri onnxruntime-gpu:latest
- Build the docker image from the Dockerfile in this repository.
docker build --rm -t onnxruntime-myriad --build-arg DEVICE=MYRIAD_FP16 --network host -f Dockerfile.openvino .
- Install the Myriad rules drivers on the host machine, following the reference here
- Run the docker image, mounting the device drivers
docker run -it --network host --privileged -v /dev:/dev onnxruntime-myriad:latest
- Download the OpenVINO full package for version 2020.3 for Linux on the host machine from this link and install it following the instructions from this link
- Install the drivers on the host machine, following the reference here
- Build the docker image from the Dockerfile in this repository.
docker build --rm -t onnxruntime-vadm --build-arg DEVICE=VAD-M_FP16 --network host -f Dockerfile.openvino .
- Run hddldaemon on the host in a separate terminal session using the following command:
$HDDL_INSTALL_DIR/bin/hddldaemon
- Run the docker image, mounting the device drivers
docker run -it --mount type=bind,source=/var/tmp,destination=/var/tmp --device /dev/ion:/dev/ion onnxruntime-vadm:latest
Public Preview
The Dockerfile used in these instructions specifically targets Raspberry Pi 3/3+ running Raspbian Stretch. The same approach should work for other ARM devices, but may require some changes to the Dockerfile, such as choosing a different base image (the FROM ... line at the top of the Dockerfile).
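For instance, retargeting the build can be as simple as pointing the FROM line at another base image before building (a sketch; arm64v8/ubuntu:18.04 is a placeholder, so pick an image that matches your device's architecture and OS):

```bash
# Swap the base image in the Dockerfile (the replacement image is a placeholder)
sed -i 's|^FROM .*|FROM arm64v8/ubuntu:18.04|' Dockerfile.arm32v7
```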
- Install dependencies:
- Docker CE on your development machine, installed by following the instructions here
- ARM emulator:
sudo apt-get install -y qemu-user-static
- Create an empty local directory
mkdir onnx-build
cd onnx-build
- Save the Dockerfile from this repo to your new directory: Dockerfile.arm32v7
- Run docker build
This will build all the dependencies first, then build ONNX Runtime and its Python bindings. This will take several hours.
docker build -t onnxruntime-arm32v7 -f Dockerfile.arm32v7 .
- Note the full path of the .whl file
  - Reported at the end of the build, after the # Build Output line.
  - It should follow the format onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl, but the version number may have changed. You'll use this path to extract the wheel file later.
- Check that the build succeeded
Upon completion, you should see an image tagged onnxruntime-arm32v7 in your list of docker images:
docker images
- Extract the Python wheel file from the docker image
(Update the path/version of the .whl file with the one noted in step 5)
docker create -ti --name onnxruntime_temp onnxruntime-arm32v7 bash
docker cp onnxruntime_temp:/code/onnxruntime/build/Linux/MinSizeRel/dist/onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl .
docker rm -fv onnxruntime_temp
This will save a copy of the wheel file, onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl, to your working directory on your host machine.
- Copy the wheel file (onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl) to your Raspberry Pi or other ARM device
On device, install the ONNX Runtime wheel file
sudo apt-get update sudo apt-get install -y python3 python3-pip pip3 install numpy # Install ONNX Runtime # Important: Update path/version to match the name and location of your .whl file pip3 install onnxruntime-0.3.0-cp35-cp35m-linux_armv7l.whl
- Test installation by following the instructions here
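As a quick sanity check after installation (a minimal sketch; assumes python3 and the wheel installed as above):

```bash
# Confirm the Python bindings import cleanly and report their version
python3 -c "import onnxruntime; print(onnxruntime.__version__)"
```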
These instructions are for JetPack SDK 4.4. The Dockerfile.jetson uses NVIDIA L4T 32.4.3 as the base image. Versions different from these may require modifications to these instructions.
The instructions assume you are on a Jetson host, in the root of an onnxruntime git clone (https://github.com/microsoft/onnxruntime).
Two-step installation is required:
1. Build a Python wheel for ONNX Runtime on the host Jetson system.
2. Build a Docker image using the ONNX Runtime wheel from step 1. You can also install the wheel on the host directly.
Here are the build commands for each step:
1.1 Install ONNX Runtime build dependencies on the JetPack 4.4 host:
sudo apt install -y --no-install-recommends \
build-essential software-properties-common cmake libopenblas-dev \
libpython3.6-dev python3-pip python3-dev
1.2 Build the ONNX Runtime Python wheel:
./build.sh --update --config Release --build --build_wheel \
--use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu
Note: You may add the --use_tensorrt and --tensorrt_home options if you wish to use NVIDIA TensorRT (support is experimental), as well as any other options supported by the build.sh script.
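A sketch of such a TensorRT-enabled build (the --tensorrt_home path is an assumption about where JetPack places the TensorRT libraries; verify it on your device):

```bash
# Build the wheel with experimental TensorRT support
# (--tensorrt_home path is an assumption for JetPack installs; verify locally)
./build.sh --update --config Release --build --build_wheel \
    --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
    --use_tensorrt --tensorrt_home /usr/lib/aarch64-linux-gnu
```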
- After the Python wheel is successfully built, use the 'find' command to locate it and let Docker install it inside the new image:
find . -name '*.whl' -print -exec sudo -H DOCKER_BUILDKIT=1 nvidia-docker build --build-arg WHEEL_FILE={} -f ./dockerfiles/Dockerfile.jetson . \;
Note: The resulting Docker image will have ONNX Runtime installed in /usr, and the ONNX Runtime wheel copied to the /onnxruntime directory. Nothing else from the ONNX Runtime source tree will be copied/installed into the image.
Note: When running the container you built in Docker, please either use the 'nvidia-docker' command instead of 'docker', or use Docker command-line options to make sure the NVIDIA runtime will be used and the appropriate files are mounted from the host. Otherwise, CUDA libraries won't be found. You can also set the NVIDIA runtime as default in Docker.
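For example, a sketch of running the built image with the NVIDIA runtime selected explicitly (<image-id> is a placeholder, since the build command above does not tag the image; look it up with docker images):

```bash
# Select the NVIDIA runtime so CUDA libraries are found inside the container
docker run --runtime nvidia -it <image-id>
```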
Public Preview
Ubuntu 16.04, Python Bindings
- Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-nuphar -f Dockerfile.nuphar .
- Run the Docker image
docker run -it onnxruntime-nuphar
Ubuntu 16.04, ROCm 3.3, AMD MIGraphX v0.7
- Build the docker image from the Dockerfile in this repository.
docker build -t onnxruntime-migraphx -f Dockerfile.migraphx .
- Run the Docker image
docker run -it --device=/dev/kfd --device=/dev/dri --group-add video onnxruntime-migraphx
Public Preview
Ubuntu 16.04
- Build the docker image from the Dockerfile in this repository
docker build -t {docker_image_name} -f Dockerfile.server .
- Run the ONNXRuntime server with the image created in step 1
docker run -v {localModelAbsoluteFolder}:{dockerModelAbsoluteFolder} -p {your_local_port}:8001 {imageName} --model_path {dockerModelAbsolutePath}
- Send HTTP requests to the container running ONNX Runtime Server
Send HTTP requests to the docker container through the bound local port. Here is the full usage document.
curl -X POST -d "@request.json" -H "Content-Type: application/json" http://0.0.0.0:{your_local_port}/v1/models/mymodel/versions/3:predict
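For example, an end-to-end sketch with concrete values substituted for the placeholders (the paths, port, and model name below are illustrative only):

```bash
# Build, run, and query the server with illustrative placeholder values
docker build -t onnxruntime-server -f Dockerfile.server .
docker run -v /home/me/models:/models -p 9001:8001 onnxruntime-server --model_path /models/mymodel.onnx
curl -X POST -d "@request.json" -H "Content-Type: application/json" http://0.0.0.0:9001/v1/models/mymodel/versions/3:predict
```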