Implement docker networks. Fixes #1982. #2207

Conversation

FroggyFlox
Member

Fixes #1982.
Fixes #2003.
Fixes #2013.
Fixes #2009.

@phillxnet, ready for review.

As detailed in #1982, inter-container communication in multi-container rock-ons is currently broken due to Docker's deprecation of container links. As originally proposed by @flukejones, we should switch to using docker networks instead.
This pull request thus fixes inter-container communication by following @flukejones' idea and recommendation. It also takes the opportunity to expand upon this and proposes a deeper integration of Docker networks into Rockstor.

Note: As this PR includes a substantial number of features and tests, each main point is split into a separate post below.

Immense thanks to @phillxnet for his critical intervention and restructuring of the corresponding branch during the development of this PR.

Overall aims & logic

The proposed implementation of docker networks (referred to as rocknets within Rockstor) can be split into three main parts:

  1. re-enable the container_links object in rock-on definitions.
  2. create a rocknet from the webUI.
  3. connect a rock-on container to rocknet(s).

In point 1, this PR follows @flukejones' idea and simply uses the existing container_links object in the rock-on JSON to create a dedicated docker network linking the two containers defined in the container_link. Notably, as these docker networks are defined in the rock-on JSON, they are deemed critical to the proper function of the rock-on and are thus not editable by the user. They are still surfaced in the webUI for convenience, however.

While rocknets (created in points 2 and 3 above) use the same kind of docker networks as those defined in container_links, they differ in that they are created by the user. As a result, they are editable by the user and can be used to connect any container(s) the user desires.

Update test_system_network.py

Black formatting

Prevents the use of 'host', 'bridge', and 'null' as connection names.

Update test_system_network.py following implementation of docker networking.

Check if rock-on uses host network and disable network customization if true.

move existing & proposed low level docker functions to system/docker.py
Helps to remove the possibility of circular imports
Move existing docker_status from rockon_helpers to system/docker
Improves separation of concerns between system level docker and Rock-ons
Establish 'sysnet' (system/network) import name space within views

Display rocknet details in Network summary table.
Use docker_name for rocknets in network dashboard widget:
Get containers and corresponding rock-on name into the connection's docker_options.
Get Rocknet(s) for rockons:
- Implement Rocknet Backbone Model: RockOnNetwork and its Collection
- Create API call to fetch data from DContainerNetwork through RockOnNetwork
- Display currently-attached rocknet(s) in rockon summary table

Allow new networks to be specified directly from the field.
Changes to DPort model:
- add publish field
- add container_name property method

Join Networks:
- Switch to sequential fetching of Ports, Containers, and then Networks, as the latter depends on the former.
- Display user networks as a multi-select field per container in the rock-on
Unpublish Ports UI: Add backend logic to skip port at docker run time if unpublished
Disable UI button if UI-port is unpublished
Implement UI to Add, Edit, and Delete docker networks.
Disallow connection toggle and edit for docker networks.
Display docker_name in network summary table.
Fix for Container_links: Create and connect each linked container to a given network defined in the Rock-on JSON definition file.
@FroggyFlox
Member Author

Container links

See related issue (#2003) for additional details and history.
Briefly, we take advantage of the existing container_links object in rock-on definition files to define the source and destination containers and store them in our existing DContainerLink storageadmin model. We then use docker network create to create a dedicated docker network, and link both containers to it (using docker network connect).

Database modifications

As we're keeping the same container_links object in the JSON file, I reused its existing parsing into the DContainerLink storageadmin table, but its current uniqueness constraints are not enough to support networks sharing a destination container: at present, only the destination container + network name combination must be unique. Expanding this constraint to source container + destination container + network name resolves the issue and allows one container to connect to multiple containers using the same network.
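
For illustration only, a minimal sketch of the widened constraint, using hypothetical field names (the actual DContainerLink model in storageadmin may differ):

from django.db import models

class DContainerLink(models.Model):
    # Hypothetical field names, for illustration only.
    destination = models.ForeignKey("DContainer", on_delete=models.CASCADE)
    source = models.ForeignKey("DContainer", related_name="source_links",
                               on_delete=models.CASCADE)
    name = models.CharField(max_length=64)  # docker network name

    class Meta:
        # Previously unique on (destination container, network name) only;
        # adding the source container lets a destination container share the
        # same network name across several links.
        unique_together = ("destination", "source", "name")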

Logic and implementation

  1. Start dealing with docker networks after the docker containers have been created and started, as connecting a container to a network requires the container to exist.
  2. For each container in the Rock-on, check for an existing link definition and proceed to link creation and container connection for each link.
  3. Check if the link already exists and create it if needed.
  4. Check that the destination container is running (or at least not absent) and connect it to the given network if it is not already in it.
  5. Repeat step 4 for the source container.
  6. Upon Rock-on uninstall, loop through all links associated with the container(s) in the Rock-on and remove the corresponding network(s). (A minimal sketch of steps 3-5 follows below.)
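
As a standalone illustration of steps 3-5 above (not the PR's actual helpers, which live in system/docker.py), here is a minimal sketch that drives the docker CLI directly; the helper names and structure are assumptions for illustration only:

import subprocess

def run(*cmd):
    # Thin docker CLI wrapper; a non-zero exit raises CalledProcessError.
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

def network_exists(name):
    return name in run("docker", "network", "ls", "--format", "{{.Name}}").split()

def connect_link(network, source_container, destination_container):
    # Step 3: create the per-link network only if it does not exist yet.
    if not network_exists(network):
        run("docker", "network", "create", network)
    # Steps 4-5: attach destination then source; the real logic additionally
    # skips containers that are absent or already attached to the network.
    for container in (destination_container, source_container):
        run("docker", "network", "connect", network, container)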

This logic has been verified to work with a test rock-on (JSON file here). This rock-on runs 4 copies of the alpine image with the following links (definition excerpt pasted below):

  • A sees B
  • C sees D
  • A sees C
	"Alpine_link_test_ABCD_2nets": {
		"container_links": {
            "alpineA": [
                {
                    "name": "alpineA-B",
                    "source_container": "alpineB"
                },
				{
                    "name": "alpineA-C",
                    "source_container": "alpineC"
                }
            ],
			"alpineC": [
				{
                    "name": "alpineC-D",
                    "source_container": "alpineD"
                }
			]
        },

As expected, this rock-on installs successfully and creates three new networks:

rockdev:~ # docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
987efc226622        alpineA-B           bridge              local
1d1719c45e5a        alpineA-C           bridge              local
7a4164d5d66f        alpineC-D           bridge              local
ddc23e495855        bridge              bridge              local
c4d12aa68f4d        host                host                local
3e39b1129036        none                null                local

Each network connects the correct containers:

rockdev:~ # docker network inspect -f '{{range .Containers}}{{.Name}}{{end}}' alpineA-C
alpineCalpineA

rockdev:~ # docker network inspect -f '{{range .Containers}}{{.Name}}{{end}}' alpineA-B
alpineBalpineA

rockdev:~ # docker network inspect -f '{{range .Containers}}{{.Name}}{{end}}' alpineC-D
alpineCalpineD

And each container can ping the correct ones:

rockdev:~ # docker attach alpineA
/ # for C in 'alpineA' 'alpineB' 'alpineC' 'alpineD'; do ping -c 2 $C ; done
PING alpineA (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.049 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.123 ms

--- alpineA ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.049/0.086/0.123 ms
PING alpineB (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.097 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.142 ms

--- alpineB ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.097/0.119/0.142 ms
PING alpineC (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.097 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.136 ms

--- alpineC ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.097/0.116/0.136 ms
ping: bad address 'alpineD'
rockdev:~ # docker attach alpineB
/ # for C in 'alpineA' 'alpineB' 'alpineC' 'alpineD'; do ping -c 2 $C ; done
PING alpineA (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.085 ms

--- alpineA ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.085/0.090/0.096 ms
PING alpineB (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.039 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.080 ms

--- alpineB ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.039/0.059/0.080 ms
ping: bad address 'alpineC'
ping: bad address 'alpineD'
rockdev:~ # docker attach alpineC
/ # for C in 'alpineA' 'alpineB' 'alpineC' 'alpineD'; do ping -c 2 $C ; done
PING alpineA (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.105 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.141 ms

--- alpineA ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.105/0.123/0.141 ms
ping: bad address 'alpineB'
PING alpineC (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.039 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.063 ms

--- alpineC ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.039/0.051/0.063 ms
PING alpineD (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.109 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.160 ms

--- alpineD ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.109/0.134/0.160 ms
rockdev:~ # docker attach alpineD
/ # for C in 'alpineA' 'alpineB' 'alpineC' 'alpineD'; do ping -c 2 $C ; done
ping: bad address 'alpineA'
ping: bad address 'alpineB'
PING alpineC (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.075 ms
64 bytes from 172.20.0.2: seq=1 ttl=64 time=0.107 ms

--- alpineC ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.075/0.091/0.107 ms
PING alpineD (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.067 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.113 ms

--- alpineD ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.067/0.090/0.113 ms

After Rock-on uninstall, all networks are correctly deleted:

rockdev:~ # docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ddc23e495855        bridge              bridge              local
c4d12aa68f4d        host                host                local
3e39b1129036        none                null                local

Finally, the resulting docker networks are surfaced to the user in the "System" > "Networks" page:
[screenshot]

As mentioned above, docker networks created from a rock-on definition file are deemed critical to the rock-on's function, so we do not allow the user to edit or delete them (the respective icons are not displayed), similar to the default docker0 bridge network.

@FroggyFlox
Member Author

Create a rocknet

See dedicated issue (#2009) for additional details and history.
To simplify the webUI and keep things consistent, this PR reuses the existing network-creation interface for creating a rocknet. We can thus find a new Connection type option labeled docker (should we change it to rocknet?):
[screenshot]

In order to offer the same level of customization as for system connections, we keep the same configuration method, set to "Auto" by default, with the possibility of selecting "Manual" parameters. In the latter case, docker-specific fields appear, corresponding to the options offered by the docker network create command.

As per the docker documentation, these are as follows:

| Option | Description |
| --- | --- |
| --attachable | Enable manual container attachment |
| --aux-address | Auxiliary IPv4 or IPv6 addresses used by Network driver |
| --config-from | The network from which to copy the configuration |
| --config-only | Create a configuration only network |
| --driver | Driver to manage the Network |
| --gateway | IPv4 or IPv6 Gateway for the master subnet |
| --ingress | Create swarm routing-mesh network |
| --internal | Restrict external access to the network |
| --ip-range | Allocate container ip from a sub-range |
| --ipam-driver | IP Address Management Driver |
| --ipam-opt | Set IPAM driver specific options |
| --ipv6 | Enable IPv6 networking |
| --label | Set metadata on a network |
| --scope | Control the network's scope |
| --subnet | Subnet in CIDR format that represents a network segment |
| --opt | Set driver specific options (see below) |
| com.docker.network.bridge.name | Bridge name to be used when creating the Linux bridge |
| com.docker.network.bridge.enable_ip_masquerade | Enable IP masquerading |
| com.docker.network.bridge.enable_icc | Enable or Disable Inter Container Connectivity |
| com.docker.network.bridge.host_binding_ipv4 | Default IP when binding container ports |
| com.docker.network.driver.mtu | Set the containers network MTU |

In our case, we only support the bridge driver (to begin with, at least), which leaves us with the following parameters:

| Option | Description |
| --- | --- |
| --aux-address | Auxiliary IPv4 or IPv6 addresses used by Network driver |
| --gateway | IPv4 or IPv6 Gateway for the master subnet |
| --internal | Restrict external access to the network |
| --ip-range | Allocate container ip from a sub-range |
| --ipv6 | Enable IPv6 networking |
| --subnet | Subnet in CIDR format that represents a network segment |
| --opt | Set driver specific options (see below) |
| com.docker.network.bridge.enable_ip_masquerade | Enable IP masquerading |
| com.docker.network.bridge.enable_icc | Enable or Disable Inter Container Connectivity |
| com.docker.network.bridge.host_binding_ipv4 | Default IP when binding container ports |
| com.docker.network.driver.mtu | Set the containers network MTU |

For consistency, one can also edit an existing rocknet. Upon edit, the network is disconnected from any attached containers, deleted, and re-created using the new settings, before being reconnected to all the containers that were attached to the original network. @phillxnet: note here that I have found it easy to enter incompatible parameters for a rocknet when setting them manually. As a result, I have included a safeguard that first "writes down" the existing settings of a rocknet before attempting to re-create it with new settings. If the latter fails, we re-create and re-connect the rocknet as it was before (thanks to its original settings having been "written down").
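
To illustrate that safeguard, here is a rough sketch, reusing the run() docker CLI wrapper (and subprocess import) from the earlier sketch; the real implementation records the settings via the database rather than in function arguments:

def edit_rocknet(name, new_opts, old_opts, attached_containers):
    # Detach and remove the existing network before re-creating it.
    for c in attached_containers:
        run("docker", "network", "disconnect", name, c)
    run("docker", "network", "rm", name)
    try:
        run("docker", "network", "create", *new_opts, name)
    except subprocess.CalledProcessError:
        # Incompatible parameters: fall back to the settings recorded beforehand.
        run("docker", "network", "create", *old_opts, name)
    # Reconnect every container that was attached to the original network.
    for c in attached_containers:
        run("docker", "network", "connect", name, c)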

Finally, one can also easily delete a rocknet using the trash icon, in which case the attached containers are first disconnected.

There are also a few UI elements of interest:

  • in System > Network, a new docker name column shows the user-friendly name of docker networks (whether rocknet or container_link).
  • in System > Network, clicking a rocknet's name displays further details (as it currently does for other connection types). In addition to more networking details, the attached container(s) and associated rock-on(s) are displayed.
  • in the Dashboard > Network widget, all docker networks are listed using their docker name (user-friendly name).
  • this PR proposes to add a new JQuery validation rule ensuring that the connection name (docker or otherwise) does not match names reserved by the system ('host', 'bridge', 'null'). Attempting to create a docker network with any of these names would fail, and using them for any other connection type is most likely not recommended either, so this rule applies to all connection types. @phillxnet, please correct me if I'm wrong about that one.

Database modifications

This section of the PR requires the creation of two new models in the storageadmin database:

  • BridgeConnection: modeled after the other network-connection-related models such as EthernetConnection, the BridgeConnection model includes all docker network-related parameters, such as its docker name (the user-friendly name shown by docker network ls, as opposed to nmcli's connection name). As with other network-related models, all "lower-level" network information (nmcli-related) is stored in the keyed NetworkConnection model object.
  • DContainerNetwork: a simple model describing container-rocknet connections. The container side is keyed to the corresponding DContainer model object, whereas the rocknet side is keyed to the corresponding BridgeConnection model object. Although simple and not storing any "new" information of its own, I believe this model is necessary because a docker network can connect to many containers at the same time, and a container can be connected to multiple docker networks at the same time. @phillxnet, it might be possible to skip this model if one can link all this with just the BridgeConnection and DContainer models by themselves (with some sort of many-to-many relationship), but I personally am not very familiar with this, and the current PR (with its DContainerNetwork model) represents a simple way to achieve it. If the performance cost is too high (if any), I can explore alternatives further; let me know. A rough sketch of both models follows below.
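
A rough, illustrative shape of the two proposed models (field names are assumptions and are not copied from the actual storageadmin code):

from django.db import models

class BridgeConnection(models.Model):
    # Keyed to the generic NetworkConnection row that holds the nmcli-level data.
    connection = models.ForeignKey("NetworkConnection", on_delete=models.CASCADE)
    docker_name = models.CharField(max_length=64)   # name shown by `docker network ls`
    usercon = models.BooleanField(default=False)    # user-created rocknet vs container_link
    subnet = models.CharField(max_length=64, null=True)
    gateway = models.CharField(max_length=64, null=True)
    internal = models.BooleanField(default=False)

class DContainerNetwork(models.Model):
    connection = models.ForeignKey(BridgeConnection, on_delete=models.CASCADE)
    container = models.ForeignKey("DContainer", on_delete=models.CASCADE)

    class Meta:
        # One row per (network, container) pair: a network can attach many
        # containers, and a container can join many networks.
        unique_together = ("connection", "container")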

@FroggyFlox
Member Author

Unpublish port and connect to rocknets

See dedicated issue (#2013) for additional details and history.
In order to make rocknets useful, we need an interface to connect containers (rock-ons) to them and to edit the ports described in a rock-on JSON file. Indeed, as described by @flukejones in the issue referenced above (#1982), joining a docker network is useful/required only if the predefined ports are not published.
As a result, this PR offers a post-install customization option for advanced users to un-publish predefined ports and then connect to the rocknet(s) of interest.
This PR thus proposes a new button (Networking) in the post-install customization modal.

(Un)publish ports

Clicking on this Networking button leads to the following new page:
[screenshot]

As all docker networking is applied at the container level, we surface all the options below for each container:

  • Publish/Un-publish ports: each port defined for each container in the rock-on JSON file is listed along with its current publication status. The user can thus choose to (un)publish ports as desired.
  • A special indication is made for the port used for the webUI (using an "i" icon). Its corresponding tooltip reminds the user that the port in question provides access to the webUI, warning of the consequences of unpublishing it.

Of note, if the UI port is unpublished, the "webUI" access button for the given rock-on is greyed out (and disabled), accompanied by a tooltip (on mouse hover) explaining that the button is disabled because the corresponding UI port is currently unpublished. See the screenshot below for illustration:
[screenshot]

Rocknets connections

  • The bottom half of the modal allows the user to connect each/any container of the rock-on to one or more rocknet(s), or none:
    • all currently existing rocknets are displayed as options
    • the user has the option to create a new rocknet on the fly (or as many as desired) by simply typing a new name in the field. In this case, a new rocknet will be created using docker default settings before the rock-on is updated.
    • all currently connected rocknets are listed for each container. As a result, one can disconnect a rocknet from the container(s) of interest by simply deleting its name from the list and proceeding with the update.

In the example below, for instance, both containers of the rock-on will be connected to the rocknet rocknet01, whereas only the second container will be connected to the rocknet rocknet02.
[screenshot]

After submission and update of the rock-on, we can verify that the containers are connected to the correct rocknets:

rockdev:~ # docker inspect rocknet01
(...)
        "Containers": {
            "8054e4b0d41a8a09db25a8b228de228cdd372acda4c5885ca0b3e68a266304b4": {
                "Name": "alpine2p1",
                "EndpointID": "dbbecfe3306e3ba6f6889efb455c421802e2f83c9051f553655bff807186de0a",
                "MacAddress": "02:42:ac:18:00:02",
                "IPv4Address": "172.24.0.2/16",
                "IPv6Address": ""
            },
            "8dd41e44d5a01272334fd0c2aa59f66f16f3dfbf20fd5bf0e1196e97e3792d28": {
                "Name": "alpine2p2",
                "EndpointID": "04b8dad378e4e9a61a6d1c0e9cccd1cfb49d48cd785a8f01c321ba350c9bc323",
                "MacAddress": "02:42:ac:18:00:03",
                "IPv4Address": "172.24.0.3/16",
                "IPv6Address": ""
            }
        },
(...)
rockdev:~ # docker inspect rocknet02
(...)
        "Containers": {
            "8dd41e44d5a01272334fd0c2aa59f66f16f3dfbf20fd5bf0e1196e97e3792d28": {
                "Name": "alpine2p2",
                "EndpointID": "467e37ab7403f29c9a8c322c66cc4da5be58bc5ad5bc21510a567419e6161106",
                "MacAddress": "02:42:ac:19:00:02",
                "IPv4Address": "172.25.0.2/16",
                "IPv6Address": ""
            }
        },
(...)

Several points of interest:

  • The Networking button in the post-install customization window is disabled for rock-ons using "host" networking, as they cannot be connected to a docker network (docker network connect fails) and their port publishing instructions are already ignored by docker run.
  • We check whether any setting was changed when the user clicks the "Next" button, and inform the user accordingly instead of proceeding without change.
  • As connecting/disconnecting a rocknet to/from a container can be done without re-creating the container, this PR also introduces a live update when applicable (rocknet changes, but no change in port publication). In this mode, all rocknet-related operations are completed without un-installing the rock-on first, leading to a much lighter and faster process (see the sketch after this list).
  • As in the rocknet creation page (from System > Network), we do not allow the creation of rocknets with names reserved for system use ('host', 'bridge', 'null'). These would fail anyway, so we prevent them from being entered as rocknet names here.
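
A sketch of that decision point, assuming a hypothetical reinstall_rockon() helper, the run() docker CLI wrapper from the earlier sketch, and a container object exposing name and rockon attributes:

def apply_networking_update(ports_changed, nets_to_add, nets_to_remove, container):
    if ports_changed:
        # Port publication changed: the container must be re-created, so fall
        # back to the full uninstall/reinstall cycle.
        reinstall_rockon(container.rockon)
        return
    # Live path: rocknet membership can be changed on a running container.
    for net in nets_to_remove:
        run("docker", "network", "disconnect", net, container.name)
    for net in nets_to_add:
        run("docker", "network", "connect", net, container.name)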

Database modifications

The newly-added publish/unpublish option needs to be stored in the DPort storageadmin model. This PR thus proposes to add a new boolean field (publish) to this model.
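
For illustration, a sketch of the proposed field and of its docker run-time consequence (field and function names are assumptions, not the actual storageadmin code):

from django.db import models

class DPort(models.Model):
    hostp = models.IntegerField()        # existing fields, shown for context only
    containerp = models.IntegerField()
    publish = models.BooleanField(default=True)   # proposed new boolean field

def port_args(ports):
    # Only published ports contribute a -p mapping to the docker run command line.
    args = []
    for p in ports:
        if p.publish:
            args += ["-p", "{}:{}".format(p.hostp, p.containerp)]
    return args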

API changes

A new storageadmin view was created: RockOnNetworkView, with its respective URL /api/rockons/networks/{rid}. This view simply lists all rocknets currently connected to a given rock-on.
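
For example, a hypothetical query against the new endpoint (host name and rock-on id are placeholders, and authentication is omitted for brevity):

import requests

resp = requests.get("https://rockstor.local/api/rockons/networks/5", verify=False)
resp.raise_for_status()
for rocknet in resp.json():
    print(rocknet)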

@FroggyFlox
Member Author

Miscellaneous

As the implemented docker networks all use the bridge driver, we now deal with/parse these bridge network connections. As a result, we no longer throw unknown ctype: bridge errors in the logs. Although this was purely "cosmetic" (with no damaging effect on Rockstor's function), the logs should now be cleaner and less confusing for users who used to see this error (as reported several times on the forum).

As discussed in my fork, many docker-related base functions are proposed to be moved from storageadmin.views.rockons_helpers.py to system.docker.py for better organization and hierarchy in the project. Immense thanks to @phillxnet for his critical intervention in this regard.

Unit tests in storageadmin.test_network.py are currently disabled as they all need updating due to recent API changes. This PR thus also includes updates to this series of unit tests. It also updates the existing tests for system.network.py (test_system_network.py) to account for the new bridge-connection- and rocknet-related logic.

In the section dedicated to joining rocknet(s) to container(s), I've included a help icon (question mark) intended to link to the corresponding section of Rockstor's documentation. As that section does not yet exist, it currently points to the documentation's main page and will need to be updated once the section is written.

Database migration

As described above, several changes were made to storageadmin, including edits to existing models as well as the addition of new models. A migration was thus created as follows...

rockdev:~ # /opt/build/bin/django makemigrations storageadmin
Migrations for 'storageadmin':
  0013_auto_20200815_2004.py:
    - Create model BridgeConnection
    - Create model DContainerNetwork
    - Add field publish to dport
    - Alter unique_together for dcontainerlink (1 constraint(s))
    - Alter unique_together for dcontainernetwork (1 constraint(s))

... and applied successfully:

rockdev:~ # /opt/build/bin/django migrate storageadmin
Operations to perform:
  Apply all migrations: storageadmin
Running migrations:
  Rendering model states... DONE
  Applying storageadmin.0013_auto_20200815_2004... OK
The following content types are stale and need to be deleted:

    storageadmin | networkinterface
    storageadmin | netatalkshare
    storageadmin | poolstatistic
    storageadmin | sharestatistic

Any objects related to these content types by a foreign key will also
be deleted. Are you sure you want to delete these content types?
If you're unsure, answer 'no'.

    Type 'yes' to continue, or 'no' to cancel: no

Testing

Leap 15.2 (ISO install):

Ran 212 tests in 58.200s

FAILED (failures=5, errors=1)

Full Testing Outputs

Leap15.2 (ISO install)
rockdev:/opt/build # ./bin/test -v 3
Creating test database for alias 'default' ('test_storageadmin')...
Operations to perform:
  Synchronize unmigrated apps: staticfiles, rest_framework, pipeline, messages, django_ztask
  Apply all migrations: oauth2_provider, sessions, admin, sites, auth, contenttypes, smart_manager, storageadmin
Synchronizing apps without migrations:
Running pre-migrate handlers for application auth
Running pre-migrate handlers for application contenttypes
Running pre-migrate handlers for application sessions
Running pre-migrate handlers for application sites
Running pre-migrate handlers for application admin
Running pre-migrate handlers for application storageadmin
Running pre-migrate handlers for application rest_framework
Running pre-migrate handlers for application smart_manager
Running pre-migrate handlers for application oauth2_provider
Running pre-migrate handlers for application django_ztask
  Creating tables...
    Creating table django_ztask_task
    Running deferred SQL...
  Installing custom SQL...
Loading 'initial_data' fixtures...
Checking '/opt/build' for fixtures...
No fixture 'initial_data' in '/opt/build'.
Loading 'initial_data' fixtures...
Checking '/opt/build' for fixtures...
No fixture 'initial_data' in '/opt/build'.
Loading 'initial_data' fixtures...
Checking '/opt/build' for fixtures...
No fixture 'initial_data' in '/opt/build'.
Loading 'initial_data' fixtures...
Checking '/opt/build' for fixtures...
No fixture 'initial_data' in '/opt/build'.
Loading 'initial_data' fixtures...
Checking '/opt/build' for fixtures...
No fixture 'initial_data' in '/opt/build'.
Running migrations:
  Rendering model states... DONE (9.371s)
  Applying contenttypes.0001_initial... OK (0.076s)
  Applying auth.0001_initial... OK (0.321s)
  Applying admin.0001_initial... OK (0.128s)
  Applying contenttypes.0002_remove_content_type_name... OK (0.077s)
  Applying auth.0002_alter_permission_name_max_length... OK (0.051s)
  Applying auth.0003_alter_user_email_max_length... OK (0.048s)
  Applying auth.0004_alter_user_username_opts... OK (0.030s)
  Applying auth.0005_alter_user_last_login_null... OK (0.050s)
  Applying auth.0006_require_contenttypes_0002... OK (0.011s)
  Applying oauth2_provider.0001_initial... OK (0.453s)
  Applying oauth2_provider.0002_08_updates... OK (0.248s)
  Applying sessions.0001_initial... OK (0.089s)
  Applying sites.0001_initial... OK (0.046s)
  Applying smart_manager.0001_initial... OK (0.393s)
  Applying smart_manager.0002_auto_20170216_1212... OK (0.040s)
  Applying storageadmin.0001_initial... OK (8.442s)
  Applying storageadmin.0002_auto_20161125_0051... OK (0.497s)
  Applying storageadmin.0003_auto_20170114_1332... OK (0.645s)
  Applying storageadmin.0004_auto_20170523_1140... OK (0.287s)
  Applying storageadmin.0005_auto_20180913_0923... OK (1.471s)
  Applying storageadmin.0006_dcontainerargs... OK (0.586s)
  Applying storageadmin.0007_auto_20181210_0740... OK (1.108s)
  Applying storageadmin.0008_auto_20190115_1637... OK (2.291s)
  Applying storageadmin.0009_auto_20200210_1948... OK (0.386s)
  Applying storageadmin.0010_sambashare_time_machine... OK (0.849s)
  Applying storageadmin.0011_auto_20200314_1207... OK (0.622s)
  Applying storageadmin.0012_auto_20200429_1428... OK (1.883s)
  Applying storageadmin.0013_auto_20200806_0948... OK (1.726s)
Running post-migrate handlers for application auth
Adding permission 'auth | permission | Can add permission'
Adding permission 'auth | permission | Can change permission'
Adding permission 'auth | permission | Can delete permission'
Adding permission 'auth | group | Can add group'
Adding permission 'auth | group | Can change group'
Adding permission 'auth | group | Can delete group'
Adding permission 'auth | user | Can add user'
Adding permission 'auth | user | Can change user'
Adding permission 'auth | user | Can delete user'
Running post-migrate handlers for application contenttypes
Adding permission 'contenttypes | content type | Can add content type'
Adding permission 'contenttypes | content type | Can change content type'
Adding permission 'contenttypes | content type | Can delete content type'
Running post-migrate handlers for application sessions
Adding permission 'sessions | session | Can add session'
Adding permission 'sessions | session | Can change session'
Adding permission 'sessions | session | Can delete session'
Running post-migrate handlers for application sites
Adding permission 'sites | site | Can add site'
Adding permission 'sites | site | Can change site'
Adding permission 'sites | site | Can delete site'
Creating example.com Site object
Resetting sequence
Running post-migrate handlers for application admin
Adding permission 'admin | log entry | Can add log entry'
Adding permission 'admin | log entry | Can change log entry'
Adding permission 'admin | log entry | Can delete log entry'
Running post-migrate handlers for application storageadmin
Adding permission 'storageadmin | pool | Can add pool'
Adding permission 'storageadmin | pool | Can change pool'
Adding permission 'storageadmin | pool | Can delete pool'
Adding permission 'storageadmin | disk | Can add disk'
Adding permission 'storageadmin | disk | Can change disk'
Adding permission 'storageadmin | disk | Can delete disk'
Adding permission 'storageadmin | snapshot | Can add snapshot'
Adding permission 'storageadmin | snapshot | Can change snapshot'
Adding permission 'storageadmin | snapshot | Can delete snapshot'
Adding permission 'storageadmin | share | Can add share'
Adding permission 'storageadmin | share | Can change share'
Adding permission 'storageadmin | share | Can delete share'
Adding permission 'storageadmin | nfs export group | Can add nfs export group'
Adding permission 'storageadmin | nfs export group | Can change nfs export group'
Adding permission 'storageadmin | nfs export group | Can delete nfs export group'
Adding permission 'storageadmin | nfs export | Can add nfs export'
Adding permission 'storageadmin | nfs export | Can change nfs export'
Adding permission 'storageadmin | nfs export | Can delete nfs export'
Adding permission 'storageadmin | iscsi target | Can add iscsi target'
Adding permission 'storageadmin | iscsi target | Can change iscsi target'
Adding permission 'storageadmin | iscsi target | Can delete iscsi target'
Adding permission 'storageadmin | api keys | Can add api keys'
Adding permission 'storageadmin | api keys | Can change api keys'
Adding permission 'storageadmin | api keys | Can delete api keys'
Adding permission 'storageadmin | network connection | Can add network connection'
Adding permission 'storageadmin | network connection | Can change network connection'
Adding permission 'storageadmin | network connection | Can delete network connection'
Adding permission 'storageadmin | network device | Can add network device'
Adding permission 'storageadmin | network device | Can change network device'
Adding permission 'storageadmin | network device | Can delete network device'
Adding permission 'storageadmin | ethernet connection | Can add ethernet connection'
Adding permission 'storageadmin | ethernet connection | Can change ethernet connection'
Adding permission 'storageadmin | ethernet connection | Can delete ethernet connection'
Adding permission 'storageadmin | team connection | Can add team connection'
Adding permission 'storageadmin | team connection | Can change team connection'
Adding permission 'storageadmin | team connection | Can delete team connection'
Adding permission 'storageadmin | bond connection | Can add bond connection'
Adding permission 'storageadmin | bond connection | Can change bond connection'
Adding permission 'storageadmin | bond connection | Can delete bond connection'
Adding permission 'storageadmin | bridge connection | Can add bridge connection'
Adding permission 'storageadmin | bridge connection | Can change bridge connection'
Adding permission 'storageadmin | bridge connection | Can delete bridge connection'
Adding permission 'storageadmin | appliance | Can add appliance'
Adding permission 'storageadmin | appliance | Can change appliance'
Adding permission 'storageadmin | appliance | Can delete appliance'
Adding permission 'storageadmin | support case | Can add support case'
Adding permission 'storageadmin | support case | Can change support case'
Adding permission 'storageadmin | support case | Can delete support case'
Adding permission 'storageadmin | dashboard config | Can add dashboard config'
Adding permission 'storageadmin | dashboard config | Can change dashboard config'
Adding permission 'storageadmin | dashboard config | Can delete dashboard config'
Adding permission 'storageadmin | group | Can add group'
Adding permission 'storageadmin | group | Can change group'
Adding permission 'storageadmin | group | Can delete group'
Adding permission 'storageadmin | user | Can add user'
Adding permission 'storageadmin | user | Can change user'
Adding permission 'storageadmin | user | Can delete user'
Adding permission 'storageadmin | samba share | Can add samba share'
Adding permission 'storageadmin | samba share | Can change samba share'
Adding permission 'storageadmin | samba share | Can delete samba share'
Adding permission 'storageadmin | samba custom config | Can add samba custom config'
Adding permission 'storageadmin | samba custom config | Can change samba custom config'
Adding permission 'storageadmin | samba custom config | Can delete samba custom config'
Adding permission 'storageadmin | posix ac ls | Can add posix ac ls'
Adding permission 'storageadmin | posix ac ls | Can change posix ac ls'
Adding permission 'storageadmin | posix ac ls | Can delete posix ac ls'
Adding permission 'storageadmin | pool scrub | Can add pool scrub'
Adding permission 'storageadmin | pool scrub | Can change pool scrub'
Adding permission 'storageadmin | pool scrub | Can delete pool scrub'
Adding permission 'storageadmin | setup | Can add setup'
Adding permission 'storageadmin | setup | Can change setup'
Adding permission 'storageadmin | setup | Can delete setup'
Adding permission 'storageadmin | sftp | Can add sftp'
Adding permission 'storageadmin | sftp | Can change sftp'
Adding permission 'storageadmin | sftp | Can delete sftp'
Adding permission 'storageadmin | plugin | Can add plugin'
Adding permission 'storageadmin | plugin | Can change plugin'
Adding permission 'storageadmin | plugin | Can delete plugin'
Adding permission 'storageadmin | advanced nfs export | Can add advanced nfs export'
Adding permission 'storageadmin | advanced nfs export | Can change advanced nfs export'
Adding permission 'storageadmin | advanced nfs export | Can delete advanced nfs export'
Adding permission 'storageadmin | oauth app | Can add oauth app'
Adding permission 'storageadmin | oauth app | Can change oauth app'
Adding permission 'storageadmin | oauth app | Can delete oauth app'
Adding permission 'storageadmin | pool balance | Can add pool balance'
Adding permission 'storageadmin | pool balance | Can change pool balance'
Adding permission 'storageadmin | pool balance | Can delete pool balance'
Adding permission 'storageadmin | tls certificate | Can add tls certificate'
Adding permission 'storageadmin | tls certificate | Can change tls certificate'
Adding permission 'storageadmin | tls certificate | Can delete tls certificate'
Adding permission 'storageadmin | rock on | Can add rock on'
Adding permission 'storageadmin | rock on | Can change rock on'
Adding permission 'storageadmin | rock on | Can delete rock on'
Adding permission 'storageadmin | d image | Can add d image'
Adding permission 'storageadmin | d image | Can change d image'
Adding permission 'storageadmin | d image | Can delete d image'
Adding permission 'storageadmin | d container | Can add d container'
Adding permission 'storageadmin | d container | Can change d container'
Adding permission 'storageadmin | d container | Can delete d container'
Adding permission 'storageadmin | d container link | Can add d container link'
Adding permission 'storageadmin | d container link | Can change d container link'
Adding permission 'storageadmin | d container link | Can delete d container link'
Adding permission 'storageadmin | d container network | Can add d container network'
Adding permission 'storageadmin | d container network | Can change d container network'
Adding permission 'storageadmin | d container network | Can delete d container network'
Adding permission 'storageadmin | d port | Can add d port'
Adding permission 'storageadmin | d port | Can change d port'
Adding permission 'storageadmin | d port | Can delete d port'
Adding permission 'storageadmin | d volume | Can add d volume'
Adding permission 'storageadmin | d volume | Can change d volume'
Adding permission 'storageadmin | d volume | Can delete d volume'
Adding permission 'storageadmin | container option | Can add container option'
Adding permission 'storageadmin | container option | Can change container option'
Adding permission 'storageadmin | container option | Can delete container option'
Adding permission 'storageadmin | d container args | Can add d container args'
Adding permission 'storageadmin | d container args | Can change d container args'
Adding permission 'storageadmin | d container args | Can delete d container args'
Adding permission 'storageadmin | d custom config | Can add d custom config'
Adding permission 'storageadmin | d custom config | Can change d custom config'
Adding permission 'storageadmin | d custom config | Can delete d custom config'
Adding permission 'storageadmin | d container env | Can add d container env'
Adding permission 'storageadmin | d container env | Can change d container env'
Adding permission 'storageadmin | d container env | Can delete d container env'
Adding permission 'storageadmin | d container device | Can add d container device'
Adding permission 'storageadmin | d container device | Can change d container device'
Adding permission 'storageadmin | d container device | Can delete d container device'
Adding permission 'storageadmin | d container label | Can add d container label'
Adding permission 'storageadmin | d container label | Can change d container label'
Adding permission 'storageadmin | d container label | Can delete d container label'
Adding permission 'storageadmin | smart capability | Can add smart capability'
Adding permission 'storageadmin | smart capability | Can change smart capability'
Adding permission 'storageadmin | smart capability | Can delete smart capability'
Adding permission 'storageadmin | smart attribute | Can add smart attribute'
Adding permission 'storageadmin | smart attribute | Can change smart attribute'
Adding permission 'storageadmin | smart attribute | Can delete smart attribute'
Adding permission 'storageadmin | smart error log | Can add smart error log'
Adding permission 'storageadmin | smart error log | Can change smart error log'
Adding permission 'storageadmin | smart error log | Can delete smart error log'
Adding permission 'storageadmin | smart error log summary | Can add smart error log summary'
Adding permission 'storageadmin | smart error log summary | Can change smart error log summary'
Adding permission 'storageadmin | smart error log summary | Can delete smart error log summary'
Adding permission 'storageadmin | smart test log | Can add smart test log'
Adding permission 'storageadmin | smart test log | Can change smart test log'
Adding permission 'storageadmin | smart test log | Can delete smart test log'
Adding permission 'storageadmin | smart test log detail | Can add smart test log detail'
Adding permission 'storageadmin | smart test log detail | Can change smart test log detail'
Adding permission 'storageadmin | smart test log detail | Can delete smart test log detail'
Adding permission 'storageadmin | smart identity | Can add smart identity'
Adding permission 'storageadmin | smart identity | Can change smart identity'
Adding permission 'storageadmin | smart identity | Can delete smart identity'
Adding permission 'storageadmin | smart info | Can add smart info'
Adding permission 'storageadmin | smart info | Can change smart info'
Adding permission 'storageadmin | smart info | Can delete smart info'
Adding permission 'storageadmin | config backup | Can add config backup'
Adding permission 'storageadmin | config backup | Can change config backup'
Adding permission 'storageadmin | config backup | Can delete config backup'
Adding permission 'storageadmin | email client | Can add email client'
Adding permission 'storageadmin | email client | Can change email client'
Adding permission 'storageadmin | email client | Can delete email client'
Adding permission 'storageadmin | update subscription | Can add update subscription'
Adding permission 'storageadmin | update subscription | Can change update subscription'
Adding permission 'storageadmin | update subscription | Can delete update subscription'
Adding permission 'storageadmin | pincard | Can add pincard'
Adding permission 'storageadmin | pincard | Can change pincard'
Adding permission 'storageadmin | pincard | Can delete pincard'
Adding permission 'storageadmin | installed plugin | Can add installed plugin'
Adding permission 'storageadmin | installed plugin | Can change installed plugin'
Adding permission 'storageadmin | installed plugin | Can delete installed plugin'
Running post-migrate handlers for application rest_framework
Running post-migrate handlers for application smart_manager
Adding permission 'smart_manager | cpu metric | Can add cpu metric'
Adding permission 'smart_manager | cpu metric | Can change cpu metric'
Adding permission 'smart_manager | cpu metric | Can delete cpu metric'
Adding permission 'smart_manager | disk stat | Can add disk stat'
Adding permission 'smart_manager | disk stat | Can change disk stat'
Adding permission 'smart_manager | disk stat | Can delete disk stat'
Adding permission 'smart_manager | load avg | Can add load avg'
Adding permission 'smart_manager | load avg | Can change load avg'
Adding permission 'smart_manager | load avg | Can delete load avg'
Adding permission 'smart_manager | mem info | Can add mem info'
Adding permission 'smart_manager | mem info | Can change mem info'
Adding permission 'smart_manager | mem info | Can delete mem info'
Adding permission 'smart_manager | vm stat | Can add vm stat'
Adding permission 'smart_manager | vm stat | Can change vm stat'
Adding permission 'smart_manager | vm stat | Can delete vm stat'
Adding permission 'smart_manager | service | Can add service'
Adding permission 'smart_manager | service | Can change service'
Adding permission 'smart_manager | service | Can delete service'
Adding permission 'smart_manager | service status | Can add service status'
Adding permission 'smart_manager | service status | Can change service status'
Adding permission 'smart_manager | service status | Can delete service status'
Adding permission 'smart_manager | s probe | Can add s probe'
Adding permission 'smart_manager | s probe | Can change s probe'
Adding permission 'smart_manager | s probe | Can delete s probe'
Adding permission 'smart_manager | nfsd call distribution | Can add nfsd call distribution'
Adding permission 'smart_manager | nfsd call distribution | Can change nfsd call distribution'
Adding permission 'smart_manager | nfsd call distribution | Can delete nfsd call distribution'
Adding permission 'smart_manager | nfsd client distribution | Can add nfsd client distribution'
Adding permission 'smart_manager | nfsd client distribution | Can change nfsd client distribution'
Adding permission 'smart_manager | nfsd client distribution | Can delete nfsd client distribution'
Adding permission 'smart_manager | nfsd share distribution | Can add nfsd share distribution'
Adding permission 'smart_manager | nfsd share distribution | Can change nfsd share distribution'
Adding permission 'smart_manager | nfsd share distribution | Can delete nfsd share distribution'
Adding permission 'smart_manager | pool usage | Can add pool usage'
Adding permission 'smart_manager | pool usage | Can change pool usage'
Adding permission 'smart_manager | pool usage | Can delete pool usage'
Adding permission 'smart_manager | net stat | Can add net stat'
Adding permission 'smart_manager | net stat | Can change net stat'
Adding permission 'smart_manager | net stat | Can delete net stat'
Adding permission 'smart_manager | nfsd share client distribution | Can add nfsd share client distribution'
Adding permission 'smart_manager | nfsd share client distribution | Can change nfsd share client distribution'
Adding permission 'smart_manager | nfsd share client distribution | Can delete nfsd share client distribution'
Adding permission 'smart_manager | share usage | Can add share usage'
Adding permission 'smart_manager | share usage | Can change share usage'
Adding permission 'smart_manager | share usage | Can delete share usage'
Adding permission 'smart_manager | nfsd uid gid distribution | Can add nfsd uid gid distribution'
Adding permission 'smart_manager | nfsd uid gid distribution | Can change nfsd uid gid distribution'
Adding permission 'smart_manager | nfsd uid gid distribution | Can delete nfsd uid gid distribution'
Adding permission 'smart_manager | task definition | Can add task definition'
Adding permission 'smart_manager | task definition | Can change task definition'
Adding permission 'smart_manager | task definition | Can delete task definition'
Adding permission 'smart_manager | task | Can add task'
Adding permission 'smart_manager | task | Can change task'
Adding permission 'smart_manager | task | Can delete task'
Adding permission 'smart_manager | replica | Can add replica'
Adding permission 'smart_manager | replica | Can change replica'
Adding permission 'smart_manager | replica | Can delete replica'
Adding permission 'smart_manager | replica trail | Can add replica trail'
Adding permission 'smart_manager | replica trail | Can change replica trail'
Adding permission 'smart_manager | replica trail | Can delete replica trail'
Adding permission 'smart_manager | replica share | Can add replica share'
Adding permission 'smart_manager | replica share | Can change replica share'
Adding permission 'smart_manager | replica share | Can delete replica share'
Adding permission 'smart_manager | receive trail | Can add receive trail'
Adding permission 'smart_manager | receive trail | Can change receive trail'
Adding permission 'smart_manager | receive trail | Can delete receive trail'
Running post-migrate handlers for application oauth2_provider
Adding permission 'oauth2_provider | application | Can add application'
Adding permission 'oauth2_provider | application | Can change application'
Adding permission 'oauth2_provider | application | Can delete application'
Adding permission 'oauth2_provider | grant | Can add grant'
Adding permission 'oauth2_provider | grant | Can change grant'
Adding permission 'oauth2_provider | grant | Can delete grant'
Adding permission 'oauth2_provider | access token | Can add access token'
Adding permission 'oauth2_provider | access token | Can change access token'
Adding permission 'oauth2_provider | access token | Can delete access token'
Adding permission 'oauth2_provider | refresh token | Can add refresh token'
Adding permission 'oauth2_provider | refresh token | Can change refresh token'
Adding permission 'oauth2_provider | refresh token | Can delete refresh token'
Running post-migrate handlers for application django_ztask
Adding permission 'django_ztask | task | Can add task'
Adding permission 'django_ztask | task | Can change task'
Adding permission 'django_ztask | task | Can delete task'
Creating test database for alias 'smart_manager' ('test_smartdb')...
Operations to perform:
  Synchronize unmigrated apps: staticfiles, rest_framework, pipeline, messages, django_ztask
  Apply all migrations: oauth2_provider, sessions, admin, sites, auth, contenttypes, smart_manager, storageadmin
Synchronizing apps without migrations:
Running pre-migrate handlers for application auth
Running pre-migrate handlers for application contenttypes
Running pre-migrate handlers for application sessions
Running pre-migrate handlers for application sites
Running pre-migrate handlers for application admin
Running pre-migrate handlers for application storageadmin
Running pre-migrate handlers for application rest_framework
Running pre-migrate handlers for application smart_manager
Running pre-migrate handlers for application oauth2_provider
Running pre-migrate handlers for application django_ztask
  Creating tables...
    Running deferred SQL...
  Installing custom SQL...
Loading 'initial_data' fixtures...
Checking '/opt/build' for fixtures...
No fixture 'initial_data' in '/opt/build'.
Loading 'initial_data' fixtures...
Checking '/opt/build' for fixtures...
No fixture 'initial_data' in '/opt/build'.
Loading 'initial_data' fixtures...
Checking '/opt/build' for fixtures...
No fixture 'initial_data' in '/opt/build'.
Loading 'initial_data' fixtures...
Checking '/opt/build' for fixtures...
No fixture 'initial_data' in '/opt/build'.
Loading 'initial_data' fixtures...
Checking '/opt/build' for fixtures...
No fixture 'initial_data' in '/opt/build'.
Running migrations:
  Rendering model states... DONE (10.925s)
  Applying contenttypes.0001_initial... OK (0.036s)
  Applying auth.0001_initial... OK (0.045s)
  Applying admin.0001_initial... OK (0.038s)
  Applying contenttypes.0002_remove_content_type_name... OK (0.072s)
  Applying auth.0002_alter_permission_name_max_length... OK (0.030s)
  Applying auth.0003_alter_user_email_max_length... OK (0.041s)
  Applying auth.0004_alter_user_username_opts... OK (0.039s)
  Applying auth.0005_alter_user_last_login_null... OK (0.043s)
  Applying auth.0006_require_contenttypes_0002... OK (0.010s)
  Applying oauth2_provider.0001_initial... OK (0.120s)
  Applying oauth2_provider.0002_08_updates... OK (0.110s)
  Applying sessions.0001_initial... OK (0.026s)
  Applying sites.0001_initial... OK (0.022s)
  Applying smart_manager.0001_initial... OK (1.490s)
  Applying smart_manager.0002_auto_20170216_1212... OK (0.037s)
  Applying storageadmin.0001_initial... OK (5.907s)
  Applying storageadmin.0002_auto_20161125_0051... OK (0.450s)
  Applying storageadmin.0003_auto_20170114_1332... OK (0.450s)
  Applying storageadmin.0004_auto_20170523_1140... OK (0.236s)
  Applying storageadmin.0005_auto_20180913_0923... OK (0.898s)
  Applying storageadmin.0006_dcontainerargs... OK (0.229s)
  Applying storageadmin.0007_auto_20181210_0740... OK (0.601s)
  Applying storageadmin.0008_auto_20190115_1637... OK (0.957s)
  Applying storageadmin.0009_auto_20200210_1948... OK (0.307s)
  Applying storageadmin.0010_sambashare_time_machine... OK (0.330s)
  Applying storageadmin.0011_auto_20200314_1207... OK (0.621s)
  Applying storageadmin.0012_auto_20200429_1428... OK (0.913s)
  Applying storageadmin.0013_auto_20200806_0948... OK (1.342s)
Running post-migrate handlers for application auth
Running post-migrate handlers for application contenttypes
Running post-migrate handlers for application sessions
Running post-migrate handlers for application sites
Running post-migrate handlers for application admin
Running post-migrate handlers for application storageadmin
Running post-migrate handlers for application rest_framework
Running post-migrate handlers for application smart_manager
Running post-migrate handlers for application oauth2_provider
Running post-migrate handlers for application django_ztask
test_get_sname (storageadmin.tests.test_config_backup.ConfigBackupTests) ... ok
test_update_rockon_shares (storageadmin.tests.test_config_backup.ConfigBackupTests) ... ok
test_valid_requests (storageadmin.tests.test_config_backup.ConfigBackupTests) ... ok
test_validate_install_config (storageadmin.tests.test_config_backup.ConfigBackupTests) ... ok
test_validate_update_config (storageadmin.tests.test_config_backup.ConfigBackupTests) ... ok
test_blink_drive (storageadmin.tests.test_disks.DiskTests) ... ok
test_btrfs_disk_import_fail (storageadmin.tests.test_disks.DiskTests) ... ok
test_disable_smart (storageadmin.tests.test_disks.DiskTests) ... ok
test_disk_scan (storageadmin.tests.test_disks.DiskTests) ... ok
test_disk_wipe (storageadmin.tests.test_disks.DiskTests) ... ok
test_enable_smart (storageadmin.tests.test_disks.DiskTests) ... ok
test_enable_smart_when_available (storageadmin.tests.test_disks.DiskTests) ... ok
test_invalid_command (storageadmin.tests.test_disks.DiskTests) ... ok
test_invalid_disk_wipe (storageadmin.tests.test_disks.DiskTests) ... ok
test_delete_requests (storageadmin.tests.test_group.GroupTests) ... ok
test_get_requests (storageadmin.tests.test_group.GroupTests) ... ok
test_post_requests (storageadmin.tests.test_group.GroupTests) ... FAIL
test_create_samba_share (storageadmin.tests.test_samba.SambaTests) ... ok
test_create_samba_share_existing_export (storageadmin.tests.test_samba.SambaTests) ... ok
test_create_samba_share_incorrect_share (storageadmin.tests.test_samba.SambaTests) ... ok
test_delete_requests_1 (storageadmin.tests.test_samba.SambaTests) ... ok
test_delete_requests_2 (storageadmin.tests.test_samba.SambaTests) ... ok
test_get_non_existent (storageadmin.tests.test_samba.SambaTests) ... ok
test_post_requests_1 (storageadmin.tests.test_samba.SambaTests) ... ok
test_post_requests_2 (storageadmin.tests.test_samba.SambaTests) ... ERROR
test_post_requests_no_admin (storageadmin.tests.test_samba.SambaTests) ... ok
test_put_requests_1 (storageadmin.tests.test_samba.SambaTests) ... ok
test_put_requests_2 (storageadmin.tests.test_samba.SambaTests) ... ok
test_validate_input (storageadmin.tests.test_samba.SambaTests) ... ok
test_validate_input_error (storageadmin.tests.test_samba.SambaTests) ... ok
test_delete_requests (storageadmin.tests.test_user.UserTests) ... FAIL
test_duplicate_name2 (storageadmin.tests.test_user.UserTests) ... ok
test_email_validation (storageadmin.tests.test_user.UserTests) ... ok
test_get (storageadmin.tests.test_user.UserTests) ... ok
test_invalid_UID (storageadmin.tests.test_user.UserTests) ... ok
test_post_requests (storageadmin.tests.test_user.UserTests) ... FAIL
test_pubkey_validation (storageadmin.tests.test_user.UserTests) ... ok
test_put_requests (storageadmin.tests.test_user.UserTests) ... ok
test_delete_requests (storageadmin.tests.test_appliances.AppliancesTests) ... ok
test_get (storageadmin.tests.test_appliances.AppliancesTests) ... ok
test_post_requests_1 (storageadmin.tests.test_appliances.AppliancesTests) ... ok
test_post_requests_2 (storageadmin.tests.test_appliances.AppliancesTests) ... ok
test_auto_update_status_command (storageadmin.tests.test_commands.CommandTests) ... ok
test_bootstrap_command (storageadmin.tests.test_commands.CommandTests) ... ok
test_current_user_command (storageadmin.tests.test_commands.CommandTests) ... ok
test_current_version_command (storageadmin.tests.test_commands.CommandTests) ... ok
test_disable_auto_update_command (storageadmin.tests.test_commands.CommandTests) ... FAIL
test_enable_auto_update_command (storageadmin.tests.test_commands.CommandTests) ... FAIL
test_kernel_command (storageadmin.tests.test_commands.CommandTests) ... ok
test_reboot (storageadmin.tests.test_commands.CommandTests) ... ok
test_refresh_disk_state (storageadmin.tests.test_commands.CommandTests) ... ok
test_refresh_pool_state (storageadmin.tests.test_commands.CommandTests) ... ok
test_refresh_share_state (storageadmin.tests.test_commands.CommandTests) ... ok
test_refresh_snapshot_state (storageadmin.tests.test_commands.CommandTests) ... ok
test_shutdown (storageadmin.tests.test_commands.CommandTests) ... ok
test_update_check_command (storageadmin.tests.test_commands.CommandTests) ... ok
test_update_command (storageadmin.tests.test_commands.CommandTests) ... ok
test_uptime_command (storageadmin.tests.test_commands.CommandTests) ... ok
test_utcnow_command (storageadmin.tests.test_commands.CommandTests) ... ok
test_get_requests (storageadmin.tests.test_dashboardconfig.DashboardConfigTests) ... ok
test_post_requests (storageadmin.tests.test_dashboardconfig.DashboardConfigTests) ... ok
test_put_requests (storageadmin.tests.test_dashboardconfig.DashboardConfigTests) ... ok
test_get (storageadmin.tests.test_disk_smart.DiskSmartTests) ... ok
test_post_reqeusts_1 (storageadmin.tests.test_disk_smart.DiskSmartTests) ... ok
test_post_requests_2 (storageadmin.tests.test_disk_smart.DiskSmartTests) ... ok
test_delete_requests (storageadmin.tests.test_email_client.EmailTests) ... ok
test_get (storageadmin.tests.test_email_client.EmailTests) ... ok
test_post_requests_1 (storageadmin.tests.test_email_client.EmailTests) ... ok
test_post_requests_2 (storageadmin.tests.test_email_client.EmailTests) ... ok
test_post_requests (storageadmin.tests.test_login.LoginTests) ... ok
test_adv_nfs_get (storageadmin.tests.test_nfs_export.NFSExportTests) ... ok
test_adv_nfs_post_requests (storageadmin.tests.test_nfs_export.NFSExportTests) ... ok
test_delete_requests (storageadmin.tests.test_nfs_export.NFSExportTests) ... ok
test_invalid_admin_host1 (storageadmin.tests.test_nfs_export.NFSExportTests) ... ok
test_invalid_admin_host2 (storageadmin.tests.test_nfs_export.NFSExportTests) ... ok
test_invalid_get (storageadmin.tests.test_nfs_export.NFSExportTests) ... ok
test_post_requests (storageadmin.tests.test_nfs_export.NFSExportTests) ... ok
test_put_requests (storageadmin.tests.test_nfs_export.NFSExportTests) ... ok
test_get (storageadmin.tests.test_oauth_app.OauthAppTests) ... ok
test_get (storageadmin.tests.test_pool_balance.PoolBalanceTests) ... ok
test_post_requests_1 (storageadmin.tests.test_pool_balance.PoolBalanceTests) ... ok
test_post_requests_2 (storageadmin.tests.test_pool_balance.PoolBalanceTests) ... ok
test_get (storageadmin.tests.test_pool_scrub.PoolScrubTests) ... ok
test_post_requests_1 (storageadmin.tests.test_pool_scrub.PoolScrubTests) ... ok
test_post_requests_2 (storageadmin.tests.test_pool_scrub.PoolScrubTests) ... ok
test_compression (storageadmin.tests.test_pools.PoolTests) ... ok
test_delete_pool_with_share (storageadmin.tests.test_pools.PoolTests) ... ok
test_get (storageadmin.tests.test_pools.PoolTests) ... ok
test_invalid_requests_1 (storageadmin.tests.test_pools.PoolTests) ... ok
test_invalid_requests_2 (storageadmin.tests.test_pools.PoolTests) ... ok
test_invalid_root_pool_edits (storageadmin.tests.test_pools.PoolTests) ... ok
test_mount_options (storageadmin.tests.test_pools.PoolTests) ... ok
test_name_regex (storageadmin.tests.test_pools.PoolTests) ... ok
test_raid0_crud (storageadmin.tests.test_pools.PoolTests) ... ok
test_raid10_crud (storageadmin.tests.test_pools.PoolTests) ... ok
test_raid1_crud (storageadmin.tests.test_pools.PoolTests) ... ok
test_raid5_crud (storageadmin.tests.test_pools.PoolTests) ... ok
test_raid6_crud (storageadmin.tests.test_pools.PoolTests) ... ok
test_single_crud (storageadmin.tests.test_pools.PoolTests) ... ok
test_delete_requests_1 (storageadmin.tests.test_sftp.SFTPTests) ... ok
test_delete_requests_2 (storageadmin.tests.test_sftp.SFTPTests) ... ok
test_get (storageadmin.tests.test_sftp.SFTPTests) ... ok
test_post_requests_1 (storageadmin.tests.test_sftp.SFTPTests) ... ok
test_post_requests_2 (storageadmin.tests.test_sftp.SFTPTests) ... ok
test_clone_command (storageadmin.tests.test_share_commands.ShareCommandTests) ... ok
test_rollback_command (storageadmin.tests.test_share_commands.ShareCommandTests) ... ok
test_compression (storageadmin.tests.test_shares.ShareTests) ... ok
test_create (storageadmin.tests.test_shares.ShareTests) ... ok
test_delete2 (storageadmin.tests.test_shares.ShareTests) ... ok
test_delete3 (storageadmin.tests.test_shares.ShareTests) ... ok
test_delete_set1 (storageadmin.tests.test_shares.ShareTests) ... ok
test_delete_share_with_snapshot (storageadmin.tests.test_shares.ShareTests) ... ok
test_get (storageadmin.tests.test_shares.ShareTests) ... ok
test_name_regex (storageadmin.tests.test_shares.ShareTests) ... ok
test_resize (storageadmin.tests.test_shares.ShareTests) ... ok
test_clone_command (storageadmin.tests.test_snapshot.SnapshotTests) ... ok
test_delete_requests (storageadmin.tests.test_snapshot.SnapshotTests) ... ok
test_get (storageadmin.tests.test_snapshot.SnapshotTests) ... ok
test_post_requests_1 (storageadmin.tests.test_snapshot.SnapshotTests) ... ok
test_post_requests_2 (storageadmin.tests.test_snapshot.SnapshotTests) ... ok
test_get (storageadmin.tests.test_tls_certificate.TlscertificateTests) ... ok
test_post_requests (storageadmin.tests.test_tls_certificate.TlscertificateTests) ... ok
test_get (storageadmin.tests.test_update_subscription.UpdateSubscriptionTests) ... ok
test_post_requests (storageadmin.tests.test_update_subscription.UpdateSubscriptionTests) ... ok
test_delete (storageadmin.tests.test_network.NetworkTests) ... ok
test_get_base (storageadmin.tests.test_network.NetworkTests) ... ok
test_nclistview_post_devices (storageadmin.tests.test_network.NetworkTests) ... ok
test_nclistview_post_devices_not_list (storageadmin.tests.test_network.NetworkTests) ... ok
test_nclistview_post_invalid (storageadmin.tests.test_network.NetworkTests) ... ok
test_put (storageadmin.tests.test_network.NetworkTests) ... ok
test_put_invalid_id (storageadmin.tests.test_network.NetworkTests) ... ok
test_snmp_0 (smart_manager.tests.test_snmp.SNMPTests) ... ok
test_snmp_0_1 (smart_manager.tests.test_snmp.SNMPTests) ... ok
test_snmp_1 (smart_manager.tests.test_snmp.SNMPTests) ... ok
test_snmp_2 (smart_manager.tests.test_snmp.SNMPTests) ... ok
test_snmp_3 (smart_manager.tests.test_snmp.SNMPTests) ... ok
test_snmp_4 (smart_manager.tests.test_snmp.SNMPTests) ... ok
test_snmp_5 (smart_manager.tests.test_snmp.SNMPTests) ... ok
test_snmp_6 (smart_manager.tests.test_snmp.SNMPTests) ... ok
test_snmp_7 (smart_manager.tests.test_snmp.SNMPTests) ... ok
test_delete_invalid (smart_manager.tests.test_task_scheduler.TaskSchedulerTests) ... ok
test_delete_valid (smart_manager.tests.test_task_scheduler.TaskSchedulerTests) ... ok
test_get (smart_manager.tests.test_task_scheduler.TaskSchedulerTests) ... ok
test_post_invalid_type (smart_manager.tests.test_task_scheduler.TaskSchedulerTests) ... ok
test_post_name_exists (smart_manager.tests.test_task_scheduler.TaskSchedulerTests) ... ok
test_post_valid (smart_manager.tests.test_task_scheduler.TaskSchedulerTests) ... ok
test_put_invalid (smart_manager.tests.test_task_scheduler.TaskSchedulerTests) ... ok
test_put_valid (smart_manager.tests.test_task_scheduler.TaskSchedulerTests) ... ok
test_balance_status_cancel_requested (fs.tests.test_btrfs.BTRFSTests) ... ok
test_balance_status_finished (fs.tests.test_btrfs.BTRFSTests) ... ok
test_balance_status_in_progress (fs.tests.test_btrfs.BTRFSTests) ... ok
test_balance_status_pause_requested (fs.tests.test_btrfs.BTRFSTests) ... ok
test_balance_status_paused (fs.tests.test_btrfs.BTRFSTests)
Test to see if balance_status() correctly identifies a Paused balance ... ok
test_balance_status_unknown_parsing (fs.tests.test_btrfs.BTRFSTests) ... ok
test_balance_status_unknown_unmounted (fs.tests.test_btrfs.BTRFSTests) ... ok
test_default_subvol (fs.tests.test_btrfs.BTRFSTests) ... ok
test_degraded_pools_found (fs.tests.test_btrfs.BTRFSTests) ... ok
test_dev_stats_zero (fs.tests.test_btrfs.BTRFSTests) ... ok
test_device_scan_all (fs.tests.test_btrfs.BTRFSTests) ... ok
test_device_scan_parameter (fs.tests.test_btrfs.BTRFSTests) ... ok
test_get_dev_io_error_stats (fs.tests.test_btrfs.BTRFSTests) ... ok
test_get_pool_raid_levels_identification (fs.tests.test_btrfs.BTRFSTests) ... ok
test_get_property_all (fs.tests.test_btrfs.BTRFSTests) ... ok
test_get_property_compression (fs.tests.test_btrfs.BTRFSTests) ... ok
test_get_property_ro (fs.tests.test_btrfs.BTRFSTests) ... ok
test_get_snap_2 (fs.tests.test_btrfs.BTRFSTests) ... ok
test_get_snap_legacy (fs.tests.test_btrfs.BTRFSTests) ... ok
test_is_subvol_exists (fs.tests.test_btrfs.BTRFSTests) ... ok
test_is_subvol_nonexistent (fs.tests.test_btrfs.BTRFSTests) ... ok
test_parse_snap_details (fs.tests.test_btrfs.BTRFSTests) ... ok
test_scrub_status_cancelled (fs.tests.test_btrfs.BTRFSTests) ... ok
test_scrub_status_conn_reset (fs.tests.test_btrfs.BTRFSTests) ... ok
test_scrub_status_finished (fs.tests.test_btrfs.BTRFSTests) ... ok
test_scrub_status_halted (fs.tests.test_btrfs.BTRFSTests) ... ok
test_scrub_status_running (fs.tests.test_btrfs.BTRFSTests) ... ok
test_share_id (fs.tests.test_btrfs.BTRFSTests) ... ok
test_shares_info_legacy_system_pool_fresh (fs.tests.test_btrfs.BTRFSTests) ... ok
test_shares_info_legacy_system_pool_used (fs.tests.test_btrfs.BTRFSTests) ... ok
test_shares_info_system_pool_boot_to_snapshot_root_user_share (fs.tests.test_btrfs.BTRFSTests) ... ok
test_shares_info_system_pool_post_btrfs_subvol_list_path_changes (fs.tests.test_btrfs.BTRFSTests) ... ok
test_shares_info_system_pool_used (fs.tests.test_btrfs.BTRFSTests) ... ok
test_snapshot_idmap_home_rollback (fs.tests.test_btrfs.BTRFSTests) ... ok
test_snapshot_idmap_home_rollback_snap (fs.tests.test_btrfs.BTRFSTests) ... ok
test_snapshot_idmap_mid_replication (fs.tests.test_btrfs.BTRFSTests) ... ok
test_snapshot_idmap_no_snaps (fs.tests.test_btrfs.BTRFSTests) ... ok
test_snapshot_idmap_snapper_root (fs.tests.test_btrfs.BTRFSTests) ... ok
test_volume_usage (fs.tests.test_btrfs.BTRFSTests) ... ok
test_pkg_changelog (system.tests.test_pkg_mgmt.SystemPackageTests) ... ok
test_pkg_latest_available (system.tests.test_pkg_mgmt.SystemPackageTests) ... ok
test_pkg_update_check (system.tests.test_pkg_mgmt.SystemPackageTests) ... ok
test_rpm_build_info (system.tests.test_pkg_mgmt.SystemPackageTests) ... ok
test_zypper_repos_list (system.tests.test_pkg_mgmt.SystemPackageTests) ... ok
test_get_byid_name_map (system.tests.test_osi.OSITests) ... ok
test_get_byid_name_map_prior_command_mock (system.tests.test_osi.OSITests) ... ok
test_get_dev_byid_name (system.tests.test_osi.OSITests) ... ok
test_get_dev_byid_name_no_devlinks (system.tests.test_osi.OSITests) ... ok
test_get_dev_byid_name_node_not_found (system.tests.test_osi.OSITests) ... ok
test_scan_disks_27_plus_disks_regression_issue (system.tests.test_osi.OSITests) ... ok
test_scan_disks_btrfs_in_partition (system.tests.test_osi.OSITests) ... ok
test_scan_disks_dell_perk_h710_md1220_36_disks (system.tests.test_osi.OSITests) ... ok
test_scan_disks_intel_bios_raid_data_disk (system.tests.test_osi.OSITests) ... ok
test_scan_disks_intel_bios_raid_sys_disk (system.tests.test_osi.OSITests) ... ok
test_scan_disks_luks_on_bcache (system.tests.test_osi.OSITests) ... ok
test_scan_disks_luks_sys_disk (system.tests.test_osi.OSITests) ... ok
test_scan_disks_mdraid_sys_disk (system.tests.test_osi.OSITests) ... ok
test_scan_disks_nvme_sys_disk (system.tests.test_osi.OSITests) ... ok
test_get_con_config (system.tests.test_system_network.SystemNetworkTests) ... ok
test_get_con_config_con_not_found (system.tests.test_system_network.SystemNetworkTests) ... ok
test_get_con_config_exception (system.tests.test_system_network.SystemNetworkTests) ... ok
test_get_dev_config (system.tests.test_system_network.SystemNetworkTests) ... ok
test_get_dev_config_dev_not_found (system.tests.test_system_network.SystemNetworkTests) ... ok
test_get_dev_config_exception (system.tests.test_system_network.SystemNetworkTests) ... ok

======================================================================
ERROR: test_post_requests_2 (storageadmin.tests.test_samba.SambaTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/mock.py", line 1201, in patched
    return func(*args, **keywargs)
  File "/opt/build/src/rockstor/storageadmin/tests/test_samba.py", line 421, in test_post_requests_2
    response = self.client.post(self.BASE_URL, data=data)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/test.py", line 168, in post
    path, data=data, format=format, content_type=content_type, **extra)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/test.py", line 90, in post
    return self.generic('POST', path, data, content_type, **extra)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/compat.py", line 222, in generic
    return self.request(**r)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/test.py", line 157, in request
    return super(APIClient, self).request(**kwargs)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/test.py", line 109, in request
    request = super(APIRequestFactory, self).request(**kwargs)
  File "/usr/local/lib/python2.7/site-packages/django/test/client.py", line 466, in request
    six.reraise(*exc_info)
  File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 132, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/usr/local/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 58, in wrapped_view
    return view_func(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py", line 71, in view
    return self.dispatch(request, *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/views.py", line 452, in dispatch
    response = self.handle_exception(exc)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/views.py", line 449, in dispatch
    response = handler(request, *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/django/utils/decorators.py", line 145, in inner
    return func(*args, **kwargs)
  File "/opt/build/src/rockstor/storageadmin/views/samba.py", line 157, in post
    return Response(SambaShareSerializer(smb_share).data)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/serializers.py", line 466, in data
    ret = super(Serializer, self).data
  File "/usr/local/lib/python2.7/site-packages/rest_framework/serializers.py", line 213, in data
    self._data = self.to_representation(self.instance)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/serializers.py", line 435, in to_representation
    ret[field.field_name] = field.to_representation(attribute)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/serializers.py", line 568, in to_representation
    self.child.to_representation(item) for item in iterable
  File "/usr/local/lib/python2.7/site-packages/rest_framework/serializers.py", line 426, in to_representation
    attribute = field.get_attribute(instance)
  File "/usr/local/lib/python2.7/site-packages/rest_framework/fields.py", line 316, in get_attribute
    raise type(exc)(msg)
KeyError: u"Got KeyError when attempting to get a value for field `groupname` on serializer `SUserSerializer`.\nThe serializer field might be named incorrectly and not match any attribute or key on the `User` instance.\nOriginal exception text was: 'getgrgid(): gid not found: 1'."

======================================================================
FAIL: test_post_requests (storageadmin.tests.test_group.GroupTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/build/src/rockstor/storageadmin/tests/test_group.py", line 99, in test_post_requests
    msg=response.data)
AssertionError: {'admin': True, 'groupname': u'ngroup2', 'gid': 1, u'id': 1}

======================================================================
FAIL: test_delete_requests (storageadmin.tests.test_user.UserTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/build/src/rockstor/storageadmin/tests/test_user.py", line 396, in test_delete_requests
    status.HTTP_200_OK, msg=response.data)
AssertionError: ['User (games) does not exist.', 'None\n']

======================================================================
FAIL: test_post_requests (storageadmin.tests.test_user.UserTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/build/src/rockstor/storageadmin/tests/test_user.py", line 175, in test_post_requests
    msg=response.data)
AssertionError: {'username': u'newUser', 'public_key': None, 'shell': u'/bin/bash', 'group': 3, 'pincard_allowed': 'no', 'admin': True, 'managed_user': True, 'homedir': u'/home/newUser', 'email': None, 'groupname': u'admin', 'gid': 5, 'user': 38, 'uid': 3, 'smb_shares': [], u'id': 2, 'has_pincard': False}

======================================================================
FAIL: test_disable_auto_update_command (storageadmin.tests.test_commands.CommandTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/build/src/rockstor/storageadmin/tests/test_commands.py", line 152, in test_disable_auto_update_command
    status.HTTP_200_OK, msg=response.data)
AssertionError: ["Failed to disable auto update due to this exception:  ([Errno 2] No such file or directory: '/etc/yum/yum-cron.conf').", 'Traceback (most recent call last):\n  File "/opt/build/src/rockstor/storageadmin/views/command.py", line 392, in post\n    auto_update(enable=False)\n  File "/opt/build/src/rockstor/system/pkg_mgmt.py", line 61, in auto_update\n    with open(YCFILE) as ifo, open(npath, "w") as tfo:\nIOError: [Errno 2] No such file or directory: \'/etc/yum/yum-cron.conf\'\n']

======================================================================
FAIL: test_enable_auto_update_command (storageadmin.tests.test_commands.CommandTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/opt/build/src/rockstor/storageadmin/tests/test_commands.py", line 144, in test_enable_auto_update_command
    status.HTTP_200_OK, msg=response.data)
AssertionError: ["Failed to enable auto update due to this exception: ([Errno 2] No such file or directory: '/etc/yum/yum-cron.conf').", 'Traceback (most recent call last):\n  File "/opt/build/src/rockstor/storageadmin/views/command.py", line 382, in post\n    auto_update(enable=True)\n  File "/opt/build/src/rockstor/system/pkg_mgmt.py", line 61, in auto_update\n    with open(YCFILE) as ifo, open(npath, "w") as tfo:\nIOError: [Errno 2] No such file or directory: \'/etc/yum/yum-cron.conf\'\n']

----------------------------------------------------------------------
Ran 212 tests in 58.200s

FAILED (failures=5, errors=1)
Destroying test database for alias 'default' ('test_storageadmin')...
Destroying test database for alias 'smart_manager' ('test_smartdb')...

@FroggyFlox
Copy link
Member Author

@phillxnet , I just realized I lost the attribution of your work in this PR after squashing all commits together... I'm sorry for not noticing this before submitting it. Let me know if you can think of a way to correct that.

@phillxnet
Copy link
Member

@FroggyFlox Re:

I just realized I lost the attribution
No worries at all. You gave a rather generous mention to me in the pr text, which was nice for what little I contributed to this code :).

This is a fantastic addition by the way and I'm super excited to finally have this lined up ready for review, and thanks for keeping it maintained in the background as we readied our Rockstor 4 offering. I'll get to the review of this shortly hopefully and we can then pop it in the testing channel for a bit to make sure all is OK once it's out in the wild as it were.

@FroggyFlox
Copy link
Member Author

Thanks a lot, @phillxnet ...
No worries on the timing, I just wanted to get that PR submitted so that it is out and ready to be looked at once the time is right for it.

Cheers!

@phillxnet
Copy link
Member

@FroggyFlox As discussed side channel, I'm intending to use this pr as the first entrant to the post "Build on openSUSE" Stable channel release (currently in its final Release Candidate phase) so we can at least 'field' it a little in a short testing run before its consequent rpm builds are also, in turn, promoted to the Stable channel if all looks to be OK in the wild. This way we can get this into Stable fairly quickly and leave the testing channel free for our pending, longer-term, technical debt Django/Python/etc updates, which will inevitably shake things up as we move towards the subsequent stable releases thereafter.

@FroggyFlox
Copy link
Member Author

Thanks a lot for the consideration, @phillxnet !

I too am looking forward to seeing feedback from users on the new features introduced by this PR as it contains quite a few novelties. I already have a branch for related documentation updates (https://github.com/FroggyFlox/rockstor-doc/tree/WIP_Docker_networking), but I don't think it should get in the docs before this PR gets to Stable...
Maybe I'll try to write some sort of how-to on the forum with some examples of what can be done with it, to encourage users to give it a try and provide feedback.

@phillxnet
Copy link
Member

I am currently reviewing this pr and so far so good, as usual with @FroggyFlox's prs :). I had previously stated:

I'm intending to use this pr as the first entrant to the post "Build on openSUSE" Stable channel release (currently in its final Release Candidate phase) so we can at least 'field' it a little in a short testing run before its consequent rpm builds are also, in turn, promoted to the Stable channel if all looks to be OK in the wild.

As it stands, we are on RC 5 (ver 4.0.4) rpms in our "Build on openSUSE" endeavour, and due to an oversight on my part we had to populate the stable channel with this same 4.0.4 rpm version. As such we can now release an rpm with this significant code change into testing only, for a bit, and preserve our current stable / testing channels: if only as RC5 in stable and RC6 in testing.

Just keen to get this into field testing so we can establish it as the 'new norm' for our Rock-ons infrastructure ready for the final stable release once we are done with the Release Candidates that currently populate both testing and stable channels.

@FroggyFlox Apologies for the massive delay on this one. But at least we can now hopefully get this in ready for our final 4 release which will be nice. Especially given the massive amount of preparation you've put in.

Currently, as we've discussed side channel already, I've only noticed the one unit test failing, due to the assumption of a docker daemon running (I think). This is no show stopper so I'll continue and hopefully get this merged as, given you've had to re-base multiple times already, I really need to get my merging/release act together.

Thanks for taking the time and effort to be so detailed and fastidious in the preparation of both this code contribution and its presentation within this pr. Helps massively for my limited cognitive ability :).

@phillxnet
Copy link
Member

This is a nice re-use of an existing failed/legacy mechanism. Well done.

  • Attempting to create a docker network with either one of these names would fail,

Nice.

As a result, we no longer throw errors in the logs: unknown ctype: bridge.

Another nice.

  • The Networking button in the post-install customization window is disabled for rock-ons using "host" networking

Well done. We really need to flag these host networking Rock-ons in the next testing run post getting the Stable out.

@phillxnet
Copy link
Member

phillxnet commented Dec 21, 2020

@FroggyFlox
If I create docknet1, edit it but make no changes (a couple of times), then install and start netdata, then spanner-add it to rocknet1 (after turning it off), I get the following:

[21/Dec/2020 18:28:09] DEBUG [storageadmin.views.network:83] The function _update_or_create_ctype has been called
[21/Dec/2020 18:28:09] DEBUG [storageadmin.views.network:83] The function _update_or_create_ctype has been called
[21/Dec/2020 18:28:09] DEBUG [storageadmin.views.network:83] The function _update_or_create_ctype has been called
[21/Dec/2020 18:28:39] DEBUG [system.osi:157] Running command: /usr/bin/docker start netdata
[21/Dec/2020 18:28:39] DEBUG [system.docker:244] the network docknet1 was detected, so do NOT create it.
[21/Dec/2020 18:28:40] DEBUG [system.osi:157] Running command: /usr/bin/docker network connect docknet1 netdata
[21/Dec/2020 18:28:40] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/bin/docker', 'network', 'connect', 'docknet1', 'netdata']. output: [''] error: ['Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network', '']
[21/Dec/2020 18:28:40] DEBUG [storageadmin.views.rockon_helpers:147] Exception while live-updating the rock-on (RockOn object)
[21/Dec/2020 18:28:40] ERROR [storageadmin.views.rockon_helpers:149] Error running a command. cmd = /usr/bin/docker network connect docknet1 netdata. rc = 1. stdout = ['']. stderr = ['Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network', '']
Traceback (most recent call last):
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/rockon_helpers.py", line 144, in update
    dnet_create_connect(rockon)
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/rockon_helpers.py", line 308, in dnet_create_connect
    container=cno.container.name, network=cno.connection.docker_name
  File "/opt/rockstor-dev/src/rockstor/system/docker.py", line 260, in dnet_connect
    run_command(list(DNET) + ["connect", network, container,], log=True)
  File "/opt/rockstor-dev/src/rockstor/system/osi.py", line 176, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/docker network connect docknet1 netdata. rc = 1. stdout = ['']. stderr = ['Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network', '']
[21/Dec/2020 18:28:40] DEBUG [storageadmin.views.rockon_helpers:155] Update rockon (Netdata) state to: install_failed (rockons/10/state_update)

I'm assuming currently that I'm doing it wrong? Apologies, but I'm still getting to grips with this docker networking thing. Or do we have some kind of race condition with this 'path'? Again not a show stopper, but noting it just in case this is a corner case or some kind of unintended use.

@phillxnet
Copy link
Member

@FroggyFlox The resulting system then shows no Rock-ons installed. But we have:

docker container list
CONTAINER ID        IMAGE                      COMMAND             CREATED             STATUS              PORTS               NAMES
df0b22e9def9        titpetric/netdata:latest   "/run.sh"           2 hours ago         Up 16 minutes                           netdata

So it looks like the above use/misuse throws our db reflection of state out.

@FroggyFlox
Copy link
Member Author

'Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network'

Mmm... curious... This is the expected error when trying to connect a docker container to a docker network if the container is using host networking. This is actually exactly why I tried to implement an exception and disable networking edits for this kind of rock-on. I'll have to see later tonight why it wasn't detected as such, though.

Thanks for finding that one.
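
For reference, the sort of guard I had in mind is roughly the following (a minimal sketch only; the helper names are illustrative and do not match the actual code in system/docker.py): check whether the container is attached to the special host network before attempting the connect, and skip it if so.

import subprocess

DOCKER = "/usr/bin/docker"

def uses_host_networking(container):
    # Illustrative sketch: list the containers attached to the special 'host'
    # network and check whether the given container is among them.
    out = subprocess.check_output(
        [DOCKER, "ps", "-a", "--filter", "network=host", "--format", "{{.Names}}"]
    )
    return container in out.decode().split()

def dnet_connect_guarded(network, container):
    # Skip the connect (rather than letting docker error out) when the
    # container shares the host network namespace.
    if uses_host_networking(container):
        print("{} uses host networking; not connecting it to {}".format(container, network))
        return
    subprocess.check_call([DOCKER, "network", "connect", network, container])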

@FroggyFlox
Copy link
Member Author

FroggyFlox commented Dec 21, 2020

By curiosity, what does the following return?

docker ps -a --filter network=host

@phillxnet
Copy link
Member

This is the expected error when trying to connect a docker container to a docker network if the container is using host networking.

OK, so at least it's understood by at least one of us :). Thanks for such a quick response. Again, not a show stopper for the testing channel; just wanted to note things as I went along in case I stumbled across something. Netdata does use host networking. If it's just that then it's likely a simple bug. I'll proceed on that assumption, as there are likely to be one or two in this amount of code addition.

I think, as we are re-installing (removing and re-installing) and get an exception, we end up with a 'not installed' state.
We might want some kind of finally clause to return the state to something sane, if this is possible (a rough sketch follows the log below).
Re-installing the same rock-on (netdata in this case) then fails as it tries to do the same thing, our db still holding the request to add this rock-on to the suspect rocknet1.

[21/Dec/2020 18:50:38] DEBUG [system.osi:157] Running command: /usr/bin/docker stop netdata
[21/Dec/2020 18:50:40] DEBUG [system.osi:157] Running command: /usr/bin/docker rm netdata
[21/Dec/2020 18:50:40] DEBUG [storageadmin.views.rockon_helpers:78] Attempted to remove a container (netdata). Out: ['netdata', ''] Err: [''] rc: 0.
[21/Dec/2020 18:50:40] DEBUG [system.osi:157] Running command: /usr/bin/docker pull titpetric/netdata:latest
[21/Dec/2020 18:50:42] DEBUG [system.osi:157] Running command: /usr/bin/docker run -d --restart=unless-stopped --name netdata -v /mnt2/netstat-config:/etc/netdata/override -v /etc/localtime:/etc/localtime:ro -p 19999:19999/tcp -p 19999:19999/udp -v /sys:/host/sys:ro -v /proc:/host/proc:ro -v /var/run/docker.sock:/var/run/docker.sock --net=host --cap-add=SYS_PTRACE titpetric/netdata:latest
[21/Dec/2020 18:50:43] DEBUG [system.docker:244] the network docknet1 was detected, so do NOT create it.
[21/Dec/2020 18:50:43] DEBUG [system.osi:157] Running command: /usr/bin/docker network connect docknet1 netdata
[21/Dec/2020 18:50:43] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/bin/docker', 'network', 'connect', 'docknet1', 'netdata']. output: [''] error: ['Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network', '']
[21/Dec/2020 18:50:43] DEBUG [storageadmin.views.rockon_helpers:173] Exception while installing the Rockon (10).
[21/Dec/2020 18:50:43] ERROR [storageadmin.views.rockon_helpers:174] Error running a command. cmd = /usr/bin/docker network connect docknet1 netdata. rc = 1. stdout = ['']. stderr = ['Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network', '']
Traceback (most recent call last):
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/rockon_helpers.py", line 171, in install
    globals().get("%s_install" % rockon.name.lower(), generic_install)(rockon)
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/rockon_helpers.py", line 341, in generic_install
    dnet_create_connect(rockon)
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/rockon_helpers.py", line 308, in dnet_create_connect
    container=cno.container.name, network=cno.connection.docker_name
  File "/opt/rockstor-dev/src/rockstor/system/docker.py", line 260, in dnet_connect
    run_command(list(DNET) + ["connect", network, container,], log=True)
  File "/opt/rockstor-dev/src/rockstor/system/osi.py", line 176, in run_command
    raise CommandException(cmd, out, err, rc)
CommandException: Error running a command. cmd = /usr/bin/docker network connect docknet1 netdata. rc = 1. stdout = ['']. stderr = ['Error response from daemon: container sharing network namespace with another container or host cannot be connected to any other network', '']
[21/Dec/2020 18:50:43] DEBUG [storageadmin.views.rockon_helpers:177] Set rock-on 10 state to install_failed
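
To illustrate the finally clause idea above, something along these lines is roughly what I had in mind (purely a sketch; the function names are stand-ins and do not match the actual rockon_helpers code):

def do_install(rockon):
    # Stand-in for the real install steps (docker pull/run, network connect, ...).
    raise RuntimeError("docker network connect failed")

def set_rockon_state(rockon, state):
    # Stand-in for the db update the real code performs on the Rock-on record.
    print("Rock-on {} state -> {}".format(rockon, state))

def install(rockon):
    # Whatever happens mid-install, always leave the db state something sane.
    state = "installed"
    try:
        do_install(rockon)
    except Exception:
        state = "install_failed"
        raise
    finally:
        set_rockon_state(rockon, state)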

@phillxnet
Copy link
Member

By curiosity, what does the following return?

docker ps -a --filter network=host
rleap15-2:~ # docker ps -a --filter network=host
CONTAINER ID        IMAGE                      COMMAND             CREATED             STATUS              PORTS               NAMES
b536989332ca        titpetric/netdata:latest   "/run.sh"           11 minutes ago      Up 11 minutes                           netdata

But still not showing in the Web-UI. Attempting a workaround for now so we can know how to get folks out of this, as I'm still keen to get this merged and into the wild for proper testing.

@FroggyFlox
Copy link
Member Author

Thanks a lot for all the feedback... That's the correct output but somehow we're not catching/parsing that correctly :-(

I'll have a look when I can and try to implement your feedback on how to deal with this state.

For a workaround, here I think we "simply" need to run our delete-rockon script, which should reset all of that after an "Update" from the webUI.

@phillxnet
Copy link
Member

phillxnet commented Dec 21, 2020

System - Network: delete the "rocknet1" previously added to netdata.
Re-install the already running netdata and all is well.
So we have a fairly obvious, and now documented here, workaround.
This is therefore not a show stopper as I see it. And we can open a bug once it's narrowed down. And from what you say it's likely a buggy or missing block on adding a rocknet via the installed-but-stopped Rock-on spanner route.
Edit: + where that rock-on is using host networking.

@phillxnet
Copy link
Member

Confirmed auto-created rocknets via your test rock-on (thanks for that):
[screenshot: auto-created rocknets]

[21/Dec/2020 19:16:44] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/bin/docker', 'stop', 'alpineA']. output: [''] error: ['Error response from daemon: No such container: alpineA', '']
[21/Dec/2020 19:16:44] DEBUG [system.osi:157] Running command: /usr/bin/docker rm alpineA
[21/Dec/2020 19:16:44] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/bin/docker', 'rm', 'alpineA']. output: [''] error: ['Error: No such container: alpineA', '']
[21/Dec/2020 19:16:44] DEBUG [storageadmin.views.rockon_helpers:78] Attempted to remove a container (alpineA). Out: [''] Err: ['Error: No such container: alpineA', ''] rc: 1.
[21/Dec/2020 19:16:44] DEBUG [system.osi:157] Running command: /usr/bin/docker pull alpine:latest
...
[21/Dec/2020 19:16:48] DEBUG [system.osi:157] Running command: /usr/bin/docker run -d --restart=unless-stopped --name alpineA -v /etc/localtime:/etc/localtime:ro -it alpine:latest ash
[21/Dec/2020 19:16:49] DEBUG [system.osi:157] Running command: /usr/bin/docker stop alpineB
[21/Dec/2020 19:16:49] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/bin/docker', 'stop', 'alpineB']. output: [''] error: ['Error response from daemon: No such container: alpineB', '']
[21/Dec/2020 19:16:49] DEBUG [system.osi:157] Running command: /usr/bin/docker rm alpineB
[21/Dec/2020 19:16:49] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/bin/docker', 'rm', 'alpineB']. output: [''] error: ['Error: No such container: alpineB', '']
[21/Dec/2020 19:16:49] DEBUG [storageadmin.views.rockon_helpers:78] Attempted to remove a container (alpineB). Out: [''] Err: ['Error: No such container: alpineB', ''] rc: 1.
[21/Dec/2020 19:16:49] DEBUG [system.osi:157] Running command: /usr/bin/docker pull alpine:latest
[21/Dec/2020 19:16:51] DEBUG [system.osi:157] Running command: /usr/bin/docker run -d --restart=unless-stopped --name alpineB -v /etc/localtime:/etc/localtime:ro -it alpine:latest ash
[21/Dec/2020 19:16:52] DEBUG [system.osi:157] Running command: /usr/bin/docker stop alpineC
[21/Dec/2020 19:16:52] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/bin/docker', 'stop', 'alpineC']. output: [''] error: ['Error response from daemon: No such container: alpineC', '']
[21/Dec/2020 19:16:52] DEBUG [system.osi:157] Running command: /usr/bin/docker rm alpineC
[21/Dec/2020 19:16:52] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/bin/docker', 'rm', 'alpineC']. output: [''] error: ['Error: No such container: alpineC', '']
[21/Dec/2020 19:16:52] DEBUG [storageadmin.views.rockon_helpers:78] Attempted to remove a container (alpineC). Out: [''] Err: ['Error: No such container: alpineC', ''] rc: 1.
[21/Dec/2020 19:16:52] DEBUG [system.osi:157] Running command: /usr/bin/docker pull alpine:latest
[21/Dec/2020 19:16:55] DEBUG [system.osi:157] Running command: /usr/bin/docker run -d --restart=unless-stopped --name alpineC -v /etc/localtime:/etc/localtime:ro -it alpine:latest ash
[21/Dec/2020 19:16:56] DEBUG [system.osi:157] Running command: /usr/bin/docker stop alpineD
[21/Dec/2020 19:16:56] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/bin/docker', 'stop', 'alpineD']. output: [''] error: ['Error response from daemon: No such container: alpineD', '']
[21/Dec/2020 19:16:56] DEBUG [system.osi:157] Running command: /usr/bin/docker rm alpineD
[21/Dec/2020 19:16:56] ERROR [system.osi:174] non-zero code(1) returned by command: ['/usr/bin/docker', 'rm', 'alpineD']. output: [''] error: ['Error: No such container: alpineD', '']
[21/Dec/2020 19:16:56] DEBUG [storageadmin.views.rockon_helpers:78] Attempted to remove a container (alpineD). Out: [''] Err: ['Error: No such container: alpineD', ''] rc: 1.
[21/Dec/2020 19:16:56] DEBUG [system.osi:157] Running command: /usr/bin/docker pull alpine:latest
[21/Dec/2020 19:16:58] DEBUG [system.osi:157] Running command: /usr/bin/docker run -d --restart=unless-stopped --name alpineD -v /etc/localtime:/etc/localtime:ro -it alpine:latest ash
[21/Dec/2020 19:16:59] DEBUG [system.docker:191] the network alpineA-B was NOT detected, so create it now.
[21/Dec/2020 19:16:59] DEBUG [system.osi:157] Running command: /usr/bin/docker network create alpineA-B
[21/Dec/2020 19:17:00] DEBUG [system.osi:157] Running command: /usr/bin/docker network connect alpineA-B alpineA
[21/Dec/2020 19:17:00] DEBUG [system.osi:157] Running command: /usr/bin/docker network connect alpineA-B alpineB
[21/Dec/2020 19:17:01] DEBUG [system.docker:191] the network alpineA-C was NOT detected, so create it now.
[21/Dec/2020 19:17:01] DEBUG [system.osi:157] Running command: /usr/bin/docker network create alpineA-C
[21/Dec/2020 19:17:01] DEBUG [system.osi:157] Running command: /usr/bin/docker network connect alpineA-C alpineA
[21/Dec/2020 19:17:01] DEBUG [system.osi:157] Running command: /usr/bin/docker network connect alpineA-C alpineC
[21/Dec/2020 19:17:02] DEBUG [system.docker:191] the network alpineC-D was NOT detected, so create it now.
[21/Dec/2020 19:17:02] DEBUG [system.osi:157] Running command: /usr/bin/docker network create alpineC-D
[21/Dec/2020 19:17:02] DEBUG [system.osi:157] Running command: /usr/bin/docker network connect alpineC-D alpineC
[21/Dec/2020 19:17:03] DEBUG [system.osi:157] Running command: /usr/bin/docker network connect alpineC-D alpineD
[21/Dec/2020 19:17:03] DEBUG [storageadmin.views.rockon_helpers:177] Set rock-on 83 state to installed

@phillxnet
Copy link
Member

phillxnet commented Dec 21, 2020

But we then get the following:

[21/Dec/2020 19:18:28] ERROR [storageadmin.views.network:214] NetworkConnection matching query does not exist.
Traceback (most recent call last):
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/network.py", line 208, in update_connection
    name=dconfig["connection"]
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 334, in get
    self.model._meta.object_name
DoesNotExist: NetworkConnection matching query does not exist.
[21/Dec/2020 19:18:28] ERROR [storageadmin.views.network:214] NetworkConnection matching query does not exist.
Traceback (most recent call last):
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/network.py", line 208, in update_connection
    name=dconfig["connection"]
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 334, in get
    self.model._meta.object_name
DoesNotExist: NetworkConnection matching query does not exist.
[21/Dec/2020 19:18:28] ERROR [storageadmin.views.network:214] NetworkConnection matching query does not exist.
Traceback (most recent call last):
  File "/opt/rockstor-dev/src/rockstor/storageadmin/views/network.py", line 208, in update_connection
    name=dconfig["connection"]
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/db/models/manager.py", line 127, in manager_method
    return getattr(self.get_queryset(), name)(*args, **kwargs)
  File "/opt/rockstor-dev/eggs/Django-1.8.16-py2.7.egg/django/db/models/query.py", line 334, in get
    self.model._meta.object_name
DoesNotExist: NetworkConnection matching query does not exist.

@FroggyFlox are these just our network db entries being brought back in line with system state?

EDIT:
It's followed by:

[21/Dec/2020 19:18:28] DEBUG [storageadmin.views.network:83] The function _update_or_create_ctype has been called
[21/Dec/2020 19:18:28] DEBUG [storageadmin.views.network:83] The function _update_or_create_ctype has been called
[21/Dec/2020 19:18:28] DEBUG [storageadmin.views.network:83] The function _update_or_create_ctype has been called
[21/Dec/2020 19:18:28] DEBUG [storageadmin.views.network:83] The function _update_or_create_ctype has been called
[21/Dec/2020 19:18:28] DEBUG [storageadmin.views.network:83] The function _update_or_create_ctype has been called

@phillxnet
Copy link
Member

phillxnet commented Dec 21, 2020

@FroggyFlox OK, I'm pretty sure this is just a re-sync process, as all looks to be functional still. And further refreshes don't exhibit this. Also, I can add a 'user' / custom rocknet just fine and it shows up as expected. This is such a significant added capability:

[screenshot: auto-created rocknets plus a user-added rocknet]

The above also confirms the edit capability on the user rocknet and not on the auto-added, rock-on-defined (required for function) rocknets. Yet another nice.

@FroggyFlox
Copy link
Member Author

OK, I'm pretty sure this is just an re-sync process as all looks to be functional still.

Yes, that is how the NetworkConnection model is refreshed: if a connection detected by nmcli is not found in the model, we see this error and then update the model accordingly. As a result, this occurs in the logs the very first time a new nmcli connection is detected. I agree it's a tad misleading as the "error" is part of the process, so this could be another one of these error messages we could improve.
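
For instance, something along these lines would keep that first-sync case out of the error logs (a sketch only, assuming the NetworkConnection model can be fetched or created by name; not the exact code in storageadmin/views/network.py):

import logging

from storageadmin.models import NetworkConnection

logger = logging.getLogger(__name__)

def refresh_connection(dconfig):
    # Treat a connection missing from the db as the normal "first time seen"
    # case instead of logging a full DoesNotExist traceback.
    nco, created = NetworkConnection.objects.get_or_create(name=dconfig["connection"])
    if created:
        logger.debug("New connection (%s) detected; adding it to the db.", nco.name)
    # ... then update the remaining fields from dconfig as before.
    return nco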

# In some case, DNET inspect does NOT return Gateway in Docker version 18.09.5, build e8ff056
# This is likely related to the following bug in which the 'Gateway' is not reported the first
# time the docker daemon is started. Upon reload of docker daemon, it IS correctly reported.
# https://github.com/moby/moby/issues/26799
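
# Illustrative sketch (not part of the PR code): one way to tolerate the
# missing 'Gateway' key when parsing `docker network inspect <network>` output.
import json
import subprocess

def network_gateway(network):
    out = subprocess.check_output(["/usr/bin/docker", "network", "inspect", network])
    ipam_config = json.loads(out)[0]["IPAM"]["Config"][0]
    # 'Gateway' may legitimately be absent until the docker daemon is reloaded.
    return ipam_config.get("Gateway")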
Copy link
Member

Nice find. Thanks for documenting this in-code.

Copy link
Member

@phillxnet phillxnet left a comment

@FroggyFlox This is all super impressive and, as per general comments to date, looks to be functioning as expected: thanks for the ongoing comments. As you have explained, we have a bug here or there, but that's hardly surprising in this amount of code addition, and we can address them in smaller, more manageable pull requests once we have this merged. I've mainly done a functional review given your greater expertise in this code area, so given our discussions to date I'm going to merge this as-is so we can move to publishing in the testing channel first and address the niggles in time as we go along. It would be a terrible shame for this pr to code-rot, and I've taken so long reviewing it that there is a danger of that happening if we don't get it merged.

Thanks again for raising our game.

As always much appreciated.

@phillxnet phillxnet merged commit 26be2da into rockstor:master Dec 22, 2020
@FroggyFlox FroggyFlox deleted the Issue1982_Implement_docker_networks_rebased branch January 6, 2021 13:09