This repo contains Jetson and dGPU Dockerfiles and utility scripts for the DeepStream Services Library (DSL).

Important notes:
- Base images (Note: you can update the `ARG BASE_IMAGE` value in the Dockerfile to pull a different image).
- The `deepstream-services-library` repo is cloned into `/opt/prominenceai/`, collocated with `/opt/nvidia/`. Note: this is a temporary step; the `libdsl.so` can/will be pulled from GitHub directly in a future release.
- Additional build steps -- in interactive mode -- are required to build the `libdsl.so` once the container is running.
- CAUTION: this repo is in the early stages of development -- please report issues!
- `docker_setup.sh` - one-time installation of Docker and its system dependencies.
- `docker_run.sh` - builds and runs the container in interactive mode; removes the container on exit.
- `Dockerfile` - Docker file used by the Docker build command.
... and many thanks to @gigwegbe and @youngjae-avikus for their contributions!
- Install Docker and Docker Compose
- Set the default Docker runtime
- Add current user to docker group
- Re-login or reboot
- Create a local Docker Registry
- Build the Docker Image
- Build and run the Docker container
- Build the libdsl.so
- Install pyds module
- Generate caffemodel engine files
- Complete Triton Setup
- Commit your file changes
- Deploy the image to the local Docker registry
- Troubleshooting
Important note: NVIDIA requires a specific release of Docker. See the Troubleshooting section if docker commands fail after updating your system with Software Updater.
First, clone the repo and make all scripts executable.

```shell
git clone https://github.com/prominenceai/deepstream-services-library-docker ; \
cd ./deepstream-services-library-docker ; \
chmod +x *.sh
```
Ensure you have `curl` installed by entering the following command.

```shell
curl --version
```

If not, install `curl` with the following command.

```shell
sudo apt install curl
```
Then, run the one-time setup script to ensure that you have the correct versions of `docker` and `docker-compose` installed.

```shell
./docker_setup.sh
```
Set the NVIDIA runtime as the default runtime in Docker. Update your `/etc/docker/daemon.json` file to read as follows.

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```
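A JSON typo in `daemon.json` will prevent the Docker daemon from starting. As a precaution, you can stage the file in `/tmp` and validate it before copying it into place - a sketch, not part of the repo's scripts; the staging path is illustrative.

```shell
# Stage the runtime config in /tmp and validate the JSON syntax.
# python3 -m json.tool exits non-zero on invalid JSON.
cat > /tmp/daemon.json <<'EOF'
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
# Then copy it into place and restart Docker:
# sudo cp /tmp/daemon.json /etc/docker/daemon.json
# sudo systemctl restart docker
```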
Add the current user to the docker group to use docker commands without `sudo`. See https://docs.docker.com/install/linux/linux-postinstall/ for more information.

```shell
sudo usermod -aG docker $USER ; newgrp docker
```
Your group membership needs to be re-evaluated. Either logout and log back in or reboot your device.
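To confirm the group change took effect in your current session, a quick check (a sketch; not part of the repo's scripts):

```shell
# Print whether the current session sees the docker group membership
if id -nG | grep -qw docker; then
    echo "docker group OK"
else
    echo "re-login (or reboot) still required"
fi
```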
Enter the following command to create a local Docker registry (one-time setup).

```shell
docker run -d -p 5000:5000 --restart=always --name registry registry:2
```
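To verify the registry is running, you can query its HTTP API - a sketch that assumes the default port 5000 used in the command above.

```shell
# The registry's /v2/ endpoint returns HTTP 200 when it is up
curl -s --max-time 2 http://localhost:5000/v2/ > /dev/null \
    && echo "registry up" \
    || echo "registry not reachable"
```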
Navigate to the `deepstream-services-library-docker` folder and build the Docker image with the following command. Make sure to add the current directory `.` as input.

```shell
docker build -t dsl:0 .
```
The Docker run script sets up the environment and runs the container with the below options:

```shell
docker run \
    -it \
    --rm \
    --net=host \
    --runtime nvidia \
    -e DISPLAY=$DISPLAY \
    -v /tmp/argus_socket:/tmp/argus_socket \
    -v /tmp/.X11-unix/:/tmp/.X11-unix \
    -v /tmp/.dsl/:/tmp/.dsl \
    -v ${HOME}/Downloads:/output \
    -w /opt/prominenceai/deepstream-services-library \
    dsl:0
```
- `docker run` - Docker run command to build and run the `dsl:0` image in a container.
- `-it` - run the container in interactive mode.
- `--rm` - remove the container on exit.
- `--net=host` - the container does not get an independent network (docker0); it shares the host's network.
- `--runtime nvidia` - redundant if set in `/etc/docker/daemon.json`.
- `-e DISPLAY=$DISPLAY` - sets the display environment variable for the container.
- `-v /tmp/argus_socket:/tmp/argus_socket` - argus tmp folder mapped into the container.
- `-v /tmp/.X11-unix/:/tmp/.X11-unix` - X11 display folder mapped into the container.
- `-v /tmp/.dsl/:/tmp/.dsl` - DSL tmp folder, created on DSL installation, mapped into the container.
- `-v ${HOME}/Downloads:/output` - Downloads folder mapped into the container.
- `-w /opt/prominenceai/deepstream-services-library` - working directory, update as desired.
- `dsl:0` - name of the image to run, update as required.
Execute the Docker run script to build and run the container in interactive mode.

```shell
./docker_run.sh
```
Once in interactive mode, copy and execute the following commands.

```shell
cd /opt/prominenceai/deepstream-services-library ; \
make -j 4 ; \
make install
```

Note: the library will be copied to `/usr/local/lib` once built.
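A quick way to confirm the install step succeeded - a sketch that checks the install path noted above.

```shell
# make install copies the library to /usr/local/lib
if [ -f /usr/local/lib/libdsl.so ]; then
    echo "libdsl.so installed"
else
    echo "libdsl.so not found - re-run make install"
fi
```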
To test the custom_pph python example (`1uri_file_pgie_iou_tiler_osd_custom_pph_window.py`), the `pyds` module must be installed in advance. Installation instructions are available in the deepstream_python_apps repo; you can download and install the wheel from its release page.

For example, if your environment is an NVIDIA Jetson with Ubuntu 18.04, Python 3.6, and DeepStream SDK 6.0.1:

```shell
wget pyds-1.1.1-py3-none-linux_aarch64.whl
pip3 install pyds-1.1.1-py3-none-linux_aarch64.whl
```

Note: for previous DeepStream versions, refer to the corresponding previous release.
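The wheel's filename encodes the target architecture (`aarch64` for Jetson), so it must match your platform. A quick check:

```shell
# Expect "aarch64" on Jetson, "x86_64" on dGPU hosts
python3 -c "import platform; print(platform.machine())"
```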
Enable DSL logging if you wish to monitor the process (optional).

```shell
export GST_DEBUG=1,DSL:4
```
Execute the python script in the `/opt/prominenceai/deepstream-services-library` root folder.

```shell
python3 make_caffemodel_engine_files.py
```
Note: this script can take several minutes to run.
The following files are generated (Jetson Nano versions by default):

```
/opt/nvidia/deepstream/deepstream/samples/models/Primary_Detector_Nano/resnet10.caffemodel_b8_gpu0_fp16.engine
/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarColor/resnet18.caffemodel_b8_gpu0_fp16.engine
/opt/nvidia/deepstream/deepstream/samples/models/Secondary_CarMake/resnet18.caffemodel_b8_gpu0_fp16.engine
/opt/nvidia/deepstream/deepstream/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b8_gpu0_fp16.engine
```

Update the Primary detector path specification in the script to generate files for other devices.
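To confirm the script completed, you can list the generated engine files - a sketch using the sample-model paths above.

```shell
# Lists the generated .engine files, or reports if none exist yet
ls /opt/nvidia/deepstream/deepstream/samples/models/*/*.engine 2>/dev/null \
    || echo "no engine files found - re-run make_caffemodel_engine_files.py"
```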
To complete the Triton Server backend and model repository setup, navigate to the DeepStream samples root folder and execute the setup scripts with the following commands.

```shell
cd /opt/nvidia/deepstream/deepstream/samples ; \
./triton_backend_setup.sh ; \
./prepare_ds_triton_model_repo.sh
```

Note: execution of the repo setup script can take several minutes.
Caution: the `docker_run.sh` script includes the `--rm` flag in the run command to remove the container on exit. All changes you've made in the running container will be lost unless you commit them first.
Use the `docker ps` command to list the running containers.

```
$ docker ps
CONTAINER ID   IMAGE                                      COMMAND                  CREATED          STATUS          PORTS                                       NAMES
da912760ce82   dsl:0                                      "/bin/bash"              42 minutes ago   Up 42 minutes                                               festive_brattain
26287d283d32   registry:2                                 "/entrypoint.sh /etc…"   3 hours ago      Up 3 hours      0.0.0.0:5000->5000/tcp, :::5000->5000/tcp   registry
605a54fa586d   mcr.microsoft.com/azureiotedge-agent:1.0   "/bin/sh -c 'exec /a…"   4 hours ago      Up 3 hours                                                  edgeAgent
```
Then commit the container by ID, using the new image name.

```shell
docker commit da912760ce82 localhost:5000/dsl:latest
```

You can now safely `exit` from interactive mode with all changes persisted.
Update your `docker_run.sh` script with the new `localhost:5000/dsl:latest` image name.
Use the following command to push the new image to the registry for deployment.

```shell
docker push localhost:5000/dsl:latest
```
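To confirm the push succeeded, you can query the registry's catalog endpoint - a sketch that assumes the local registry from the earlier step is listening on port 5000.

```shell
# Expect a JSON body such as {"repositories":["dsl"]}
curl -s --max-time 2 http://localhost:5000/v2/_catalog \
    || echo "registry not reachable"
```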
If docker commands fail with the following error:

```
docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380:
starting container process caused: error adding seccomp filter rule for syscall clone3: permission denied: unknown.
```

NVIDIA requires a specific release of Docker - see dusty-nv/jetson-containers#108.

Solution: reinstall the correct version with the following commands.
```shell
distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
  && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
  && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

sudo apt-get update
sudo apt-get install nvidia-docker2=2.8.0-1
```