This is a project to build docker containers for Network Optix Nx Witness VMS, and Network Optix Nx Meta VMS, the developer test and preview version of Nx Witness, and Digital Watchdog DW Spectrum IPVMS, the US licensed and OEM branded version of Nx Witness.
Licensed under the MIT License.
Code and Pipeline is on GitHub:
- Publishing of `stable` tags for versions older than 5.0 is disabled, see this issue for details.
Docker container images are published on Docker Hub.
Images are tagged using `latest`, `stable`, and the specific version number.
The `latest` tag uses the latest release version; `latest` may be the same as `rc` or `beta`.
The `stable` tag uses the stable release version; `stable` may be the same as `latest`.
The `develop`, `rc`, and `beta` tags are assigned to test or pre-release versions; they are not generated with every release, so use them with care and only as needed.
E.g.

```shell
# Latest NxMeta-LSIO
docker pull docker.io/ptr727/nxmeta-lsio:latest

# Stable DWSpectrum
docker pull docker.io/ptr727/dwspectrum:stable

# 5.0.0.35136 NxWitness-LSIO
docker pull docker.io/ptr727/nxwitness-lsio:5.0.0.35136
```
The images are updated weekly, picking up the latest upstream OS updates, and newly released product versions.
See the Build Process section for more details on how versions and builds are managed.
I ran DW Spectrum in my home lab on an Ubuntu Virtual Machine, and was looking for a way to run it in Docker. Nx Witness provided no support for Docker, but I did find The Home Repot NxWitness project, which inspired me to create this project.
I started with individual repositories for Nx Witness, Nx Meta, and DW Spectrum, but that soon became cumbersome with lots of duplication, and I combined all product flavors into this one project.
More recently Network Optix does provide Experimental Docker Support, and they publish a reference docker project, but they do not publish container images.
The biggest outstanding challenges with running in docker are hardware bound licensing and lack of admin defined storage locations, see the Network Optix and Docker section for details.
The project supports three product variants:
- Network Optix Nx Witness VMS.
- Network Optix Nx Meta VMS, the developer test and preview version of Nx Witness.
- Digital Watchdog DW Spectrum IPVMS, the US licensed and OEM branded version of Nx Witness.
The project creates two variants of each product using different base images:
- Ubuntu using the `ubuntu:focal` base image.
- LinuxServer using the `lsiobase/ubuntu:focal` base image.
Note that smaller base images, like Alpine, and the current Ubuntu 22.04 LTS (Jammy Jellyfish), are not supported by the mediaserver.
The LinuxServer (LSIO) base images provide valuable functionality:
- The LSIO images are based on s6-overlay, and LSIO produces containers for many popular open source applications.
- LSIO allows us to specify the user account to use when running the container mediaserver process.
- This is desired if we do not want to run as root, or required if we need user specific permissions when accessing mapped volumes.
- We could achieve a similar outcome by using Docker's `--user` option, but the mediaserver's `root-tool` (used for license enforcement) requires running as `root`, thus the container must still be executed with `root` privileges, and we cannot use the `--user` option.
- The non-LSIO images do run the mediaserver as a non-root user, granting `sudo` rights to run the `root-tool` as `root`, but the user account `${COMPANY_NAME}` does not readily map to a user on the host system.
The docker configuration is simple, requiring just two volume mappings for configuration files and media storage.
Volumes:

- `/config`: Configuration files.
- `/media`: Recording files.
- `/archive`: Backup files. (Optional)

Note that if your storage is not showing up, see the Missing Storage section for help.

Ports:

- `7001`: Default server port.

Environment variables:

- `PUID`: User Id (LSIO only, see the docs for usage).
- `PGID`: Group Id (LSIO only).
- `TZ`: Timezone, e.g. `America/Los_Angeles`.
Any network mode can be used, but due to the hardware bound licensing, `host` mode is recommended.
```shell
docker create \
  --name=nxwitness-lsio-test-container \
  --hostname=nxwitness-lsio-test-host \
  --domainname=foo.bar.net \
  --restart=unless-stopped \
  --network=host \
  --env TZ=America/Los_Angeles \
  --volume /mnt/nxwitness/config:/config:rw \
  --volume /mnt/nxwitness/media:/media:rw \
  docker.io/ptr727/nxwitness-lsio:stable

docker start nxwitness-lsio-test-container
```
version: "3.7"
services:
nxwitness:
image: docker.io/ptr727/nxwitness-lsio:stable
container_name: nxwitness-lsio-test-container
restart: unless-stopped
network_mode: host
environment:
- TZ=Americas/Los_Angeles
volumes:
- /mnt/nxwitness/config:/config
- /mnt/nxwitness/media:/media
The LSIO images re-link internal paths, while the non-LSIO images need to map volumes directly to the installed folders.
version: "3.7"
services:
nxwitness:
image: docker.io/ptr727/nxwitness:stable
container_name: nxwitness-test-container
restart: unless-stopped
network_mode: host
volumes:
- /mnt/nxwitness/config/etc:/opt/networkoptix/mediaserver/etc
- /mnt/nxwitness/media:/opt/networkoptix/mediaserver/var/
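Either compose file can be brought up with Docker Compose. A minimal usage sketch, assuming the file is saved as `docker-compose.yml` in the current directory (the file name is an assumption, not part of this project):

```shell
# Create and start the service defined in the compose file.
docker compose --file docker-compose.yml up --detach

# Follow the mediaserver logs, and tear the stack down when done testing.
docker compose logs --follow
docker compose down
```

On older Docker installs the standalone `docker-compose` binary can be used in place of the `docker compose` plugin.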
- Add the template URL `https://github.com/ptr727/NxWitness/tree/master/Unraid` to the "Template Repositories" section, at the bottom of the "Docker" configuration tab, and click "Save".
- Create a new container by clicking the "Add Container" button, and select the desired product template from the dropdown.
- If using Unassigned Devices for media storage, use the `RW/Slave` access mode.
- Use the `nobody` and `users` identifiers, `PUID=99` and `PGID=100`.
- Register the Unraid filesystems in the `additionalLocalFsTypes` advanced settings, see the Missing Storage section for help.
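Outside of the Unraid template, the same settings map roughly onto plain docker options as in the sketch below; the container name and host paths are examples only, `PUID=99`/`PGID=100` correspond to the `nobody`/`users` identifiers, and the `:rw,slave` mount option mirrors the template's `RW/Slave` access mode:

```shell
# Minimal sketch only: example Unraid appdata and Unassigned Devices paths, LSIO image.
docker create \
  --name=nxwitness-lsio-unraid \
  --network=host \
  --env PUID=99 \
  --env PGID=100 \
  --env TZ=America/Los_Angeles \
  --volume /mnt/user/appdata/nxwitness:/config:rw \
  --volume /mnt/disks/nvr:/media:rw,slave \
  docker.io/ptr727/nxwitness-lsio:stable
```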
- Nx Witness:
- Nx Meta:
- DW Spectrum:
- Advanced `mediaserver.conf` Configuration:
  - v4: `https://[hostname]:7001/static/index.html#/developers/serverDocumentation`
  - v5: JSON: `https://[hostname]:7001/api/settingsDocumentation`
- Advanced Web Configuration:
  - v4: `https://[hostname]:7001/static/index.html#/advanced`
  - v5: `https://[hostname]:7001/#/settings/advanced`
  - Get State: JSON: `https://[hostname]:7001/api/systemSettings`
- Storage Reporting:
  - v4: `https://[hostname]:7001/static/health.html#/health/storages`
  - v5: `https://[hostname]:7001/#/health/storages`
The build is divided into the following parts:

- A Makefile is used to create the `Dockerfile`'s for permutations of the "Entrypoint" and "LSIO" variants, and for each of the "NxMeta", "NxWitness", and "DWSpectrum" products.
  - There is similarity between the container variants, and to avoid code duplication the `Dockerfile` is dynamically constructed from file snippets.
  - Docker does not support a native `include` directive; instead the M4 macro processor is used to assemble the snippets (see the sketch after this list).
  - The various docker project directories are created by running `make create`.
  - The project directories could be created at build time, but they are currently created and checked into source control to simplify change review.
- The `Dockerfile` downloads and installs the mediaserver installer at build time using the `DOWNLOAD_URL` environment variable.
  - The Nx download URL can be a DEB file or a ZIP file containing a DEB file, and the DEB file in the ZIP file may not have the same name as the ZIP file.
  - The Download.sh script handles these variances, making the DEB file available to install.
  - It is possible to download the DEB file outside the `Dockerfile` and `COPY` it into the image, but the current process downloads inside the `Dockerfile` to minimize external build dependencies.
- Updating the available product versions and download URL's is done using the custom CreateMatrix utility app.
  - The Version.json information is updated using the mediaserver release API, using the same logic as the Nx Open desktop client.
    - `CreateMatrix version --version=./Make/Version.json`
  - The Matrix.json is created from the `Version.json` file and optionally updated.
    - `CreateMatrix matrix --version=./Make/Version.json --matrix=./Make/Matrix.json --update`
- Local builds can be performed using `make build`, where the download URL and version information default to the `Dockerfile` values.
  - All images will be built and launched using `make build` and `make up`, allowing local testing using the build output URL's.
  - After testing, stop and delete containers and images using `make clean`.
- Automated builds are done using GitHub Actions and the BuildPublishPipeline.yml pipeline.
  - The pipeline runs the `CreateMatrix` utility to create a `Matrix.json` file containing all the container image details.
  - A Matrix strategy is used to build and publish a container image for every entry in the `Matrix.json` file.
  - Conditional build-time branch logic controls image creation vs. image publishing.
- Updating the mediaserver inside docker is not supported; to update the server version, pull a new container image, it is "the docker way".
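For readers unfamiliar with M4, the idea is simply to expand include-style macros into a single `Dockerfile`. The snippet and file names below are hypothetical and only illustrate the mechanism, they are not the project's actual snippet layout:

```shell
# Hypothetical illustration of assembling a Dockerfile from snippets with M4.
cat > header.m4 <<'EOF'
FROM ubuntu:focal
EOF

cat > Dockerfile.m4 <<'EOF'
include(`header.m4')
ARG DOWNLOAD_URL
RUN echo "Installing from ${DOWNLOAD_URL}"
EOF

# m4 expands the include() macros and writes the final Dockerfile.
m4 Dockerfile.m4 > Dockerfile
```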
There are issues with Network Optix on Docker, ranging from minor annoyances to serious problems, but compared to other VMS/NVR software I've paid for and used, it is very light on system resources, has a good feature set, and with the added docker support runs great in my home lab.
Issue:
The camera recording license keys are activated and bound to hardware attributes of the host server.
Docker containers are supposed to be portable, and moving containers between hosts will break license activation.
Possible Solution:
A portable approach could apply licenses to the Cloud Account, allowing runtime enforcement that is not hardware bound.
Issue:
The mediaserver attempts to automatically decide what storage to use.
Filesystem types are filtered out if not on the supported list, e.g. popular and common ZFS is not supported.
Duplicate filesystems are ignored, e.g. multiple logical mounts on the same physical storage are ignored.
The server blindly creates database files on any writable storage it discovers, regardless of whether that storage was assigned for use or not.
Possible Solution:
Remove the elaborate and failure-prone filesystem discovery and filtering logic; use the specified storage, and only the specified storage.
Issue:
The mediaserver binds to any discovered network adapter.
On docker this means the server binds to all docker networks of all running containers; there could be hundreds or thousands of them, making the network graph useless and consuming unnecessary resources.
Possible Solution:
Remove the auto-bind functionality, or make it configurable with the default disabled, and allow the administrator to define the specific networks to bind with.
Issue:
This section is personal opinion; I've worked in the ISV industry for many years, and I've taken perpetually licensed products to SaaS.
Living in the US, I have to buy my licenses from Digital Watchdog, and in my experience their license enforcement policy is inflexible: three activations and you have to buy a new license.
That really means the Lifetime Upgrades and No Annual Agreements license lasts for the lifetime of the hardware on which the license was activated. So if hardware is replaced every two years, three activations give a lifetime of about six years, not much of a lifetime compared to mine.
There is no such thing as free-of-cost software; at minimum somebody pays for time, and at minimum vulnerabilities must be fixed, and the EULA does not excuse an ISV from willful neglect.
Add in the ongoing costs of cloud hosting, development of new features, and providing support, and where does the money come from?
Will we eventually see a license scheme change, or is this a customer acquisition play ending in a sale or going public, hopefully not a cash out and bail scheme?
Possible Solution:
I'd be happy to pay a reasonable yearly subscription or maintenance fee, knowing I get ongoing fixes, features, and support, and my licenses being tied to my cloud account.
My wishlist for better docker support:
- Publish always up to date and ready to use docker images on Docker Hub.
- Do not bind the license to hardware, use the cloud account for license enforcement.
- Do not filter storage filesystems, allow the administrator to specify and use any storage location backed by any filesystem.
- Do not pollute the filesystem by creating folders in any detected storage, use only storage as specified.
- Do not bind to any discovered network adapter, allow the administrator to specify the bound network adapter, or add an option to opt-out/opt-in to auto-binding.
- Implement a more useful recording archive management system, allowing for separate high speed recording, and high capacity playback storage volumes. E.g. as implemented by Milestone XProtect VMS.
Please do contact Network Optix and ask for better docker support.
I am not affiliated with Network Optix, I cannot provide support for their products, please contact Network Optix Support for product support issues.
If there are issues with the docker build scripts used in this project, please create a GitHub Issue.
Note that I only test and run `nxmeta-lsio:stable` in my home lab; other images get very little to no testing, please test accordingly.
- v4 does not support Windows Subsystem for Linux v2 (WSL2).
  - The DEB installer `postinst` step tries to start the service, and fails the install.
    - `Detected runtime type: wsl.`
    - `System has not been booted with systemd as init system (PID 1). Can't operate.`
  - v4 logic tests for `if [[ $RUNTIME != "docker" ]]`, while the runtime reported by WSL2 is `wsl` not `docker`.
  - v5 logic tests for `if [[ -f "/.dockerenv" ]]`, the presence of a Docker environment, which is more portable and does work in WSL2.
- Downgrading from v5 to v4 is not supported.
  - The mediaserver will fail to start:
    - `ERROR ec2::detail::QnDbManager(...): DB Error at ec2::ErrorCode ec2::detail::QnDbManager::doQueryNoLock(...): No query Unable to fetch row`
  - Make a copy, or ZFS snapshot, of the server configuration before upgrading, and restore the old configuration when downgrading (see the example after this list).
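As an illustration of the snapshot approach, using the example ZFS dataset and container names that appear elsewhere in this document (`ssdpool/appdata` mapped to `/config`, container `nxwitness-lsio-test-container`); adjust to your own setup:

```shell
# Snapshot the configuration dataset before upgrading (names are examples).
zfs snapshot ssdpool/appdata@pre-v5-upgrade

# If a downgrade is needed, stop the container, roll the configuration back, and restart.
# Note that zfs rollback only works directly against the most recent snapshot.
docker stop nxwitness-lsio-test-container
zfs rollback ssdpool/appdata@pre-v5-upgrade
docker start nxwitness-lsio-test-container
```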
The following section will help troubleshoot common problems with missing storage.
If this does not help, please contact Network Optix Support.
Please do not open a GitHub issue unless you are positive the issue is with the `Dockerfile`.
Note that the configuration URL's changed between v4 and v5, see the Advanced Configuration section for version specific URL's.
Confirm that all the mounted volumes are listed in the available storage locations in the web admin portal.
Enable debug logging in the mediaserver:
Edit `/config/etc/mediaserver.conf`, set `logLevel=verbose`, and restart the server.
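For example, from the docker host, something like the sketch below works, assuming the LSIO `/config` volume mapping and container name from the earlier `docker create` example; if the `logLevel` key is not present in the file, add the `logLevel=verbose` line manually:

```shell
# Switch logLevel to verbose in the mapped config (host path and container name are examples),
# then restart the container to pick up the change.
sed -i 's/^logLevel=.*/logLevel=verbose/' /mnt/nxwitness/config/etc/mediaserver.conf
docker restart nxwitness-lsio-test-container
```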
Look for clues in `/config/var/log/log_file.log`.
E.g.

```
VERBOSE nx::vms::server::fs: shfs /media fuse.shfs - duplicate
VERBOSE nx::vms::server::fs: /dev/sdb8 /media btrfs - duplicate
DEBUG QnStorageSpaceRestHandler(0x7f85043b0b00): Return 0 storages and 1 protocols
```
Get a list of the mapped volume mounts in the running container, and verify that `/config` and `/media` are in the JSON `Mounts` section:

```shell
docker ps --no-trunc
docker container inspect [containername]
```
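To narrow the output down to just the mounts, the inspect format template below can be used; piping through `jq` is optional and assumes `jq` is installed on the host:

```shell
# Print only the Mounts array of the container's inspect output.
docker container inspect --format '{{json .Mounts}}' [containername] | jq
```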
Launch a shell in the running container and get a list of filesystem mounts:

```shell
docker ps --no-trunc
docker exec --interactive --tty [containername] /bin/bash
cat /proc/mounts
exit
```
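The same mount list can also be read without an interactive shell:

```shell
docker exec [containername] cat /proc/mounts
```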
Example output for ZFS:

```
ssdpool/appdata /config zfs rw,noatime,xattr,posixacl 0 0
nvrpool/nvr /media zfs rw,noatime,xattr,posixacl 0 0
ssdpool/docker /archive zfs rw,noatime,xattr,posixacl 0 0
```
Mount `/config` is on device `ssdpool/appdata` and the filesystem is `zfs`.
Mount `/media` is on device `nvrpool/nvr` and the filesystem is `zfs`.
Mount `/archive` is on device `ssdpool/docker` and the filesystem is `zfs`.
In this case the devices are unique and will not be filtered, but `zfs` is not a supported filesystem type and needs to be registered.
Example output for Unraid FUSE:

```
shfs /config fuse.shfs rw,nosuid,nodev,noatime,user_id=0,group_id=0,allow_other 0 0
shfs /media fuse.shfs rw,nosuid,nodev,noatime,user_id=0,group_id=0,allow_other 0 0
shfs /archive fuse.shfs rw,nosuid,nodev,noatime,user_id=0,group_id=0,allow_other 0 0
```
In this case there are two issues: the device is `shfs` for all three mounts and they will be filtered as duplicates, and the filesystem type is `fuse.shfs`, which is not supported and needs to be registered.
Log file output for Unraid FUSE:

```
VERBOSE nx::vms::server::fs: shfs /config fuse.shfs - added
VERBOSE nx::vms::server::fs: shfs /media fuse.shfs - added
VERBOSE nx::vms::server::fs: shfs /archive fuse.shfs - duplicate
```
The `/archive` mount is classified as a duplicate and ignored; map just `/media`, do not map `/archive`.
Alternatively, use the "Unassigned Devices" plugin and dedicate e.g. an XFS formatted SSD drive to `/media` and/or `/config`.
Example output for Unraid BTRFS:

```
/dev/sdb8 /test btrfs rw,relatime,space_cache,subvolid=5,subvol=/test 0 0
/dev/sdb8 /config btrfs rw,relatime,space_cache,subvolid=5,subvol=/config 0 0
/dev/sdb8 /media btrfs rw,relatime,space_cache,subvolid=5,subvol=/media 0 0
/dev/sdb8 /archive btrfs rw,relatime,space_cache,subvolid=5,subvol=/archive 0 0
```

```
VERBOSE nx::vms::server::fs: /dev/sdb8 /test btrfs - added
VERBOSE nx::vms::server::fs: /dev/sdb8 /config btrfs - duplicate
VERBOSE nx::vms::server::fs: /dev/sdb8 /media btrfs - duplicate
VERBOSE nx::vms::server::fs: /dev/sdb8 /archive btrfs - duplicate
```
In this example the `/test` volume was accepted, but all other volumes on `/dev/sdb8` were ignored as duplicates.
Add the required filesystem types in the advanced configuration menu.
Edit the `additionalLocalFsTypes` option, add the required filesystem types, e.g. `fuse.shfs,btrfs,zfs`, and restart the server.
Alternatively call the configuration API directly:
```shell
wget --no-check-certificate --user=[username] --password=[password] https://[hostname]:7001/api/systemSettings?additionalLocalFsTypes=fuse.shfs,btrfs,zfs
```
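If curl is preferred over wget, an equivalent call would look like the sketch below, where `--insecure` mirrors wget's `--no-check-certificate` for self-signed certificates:

```shell
curl --insecure --user [username]:[password] "https://[hostname]:7001/api/systemSettings?additionalLocalFsTypes=fuse.shfs,btrfs,zfs"
```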
To my knowledge there is no solution to duplicate devices being filtered; please contact Network Optix Support and ask them to stop filtering filesystem types and devices.