Implement docker networks. Fixes #1982. #2207
Squashed commit summary:

- Update test_system_network.py: Black formatting.
- Prevent the use of 'host', 'bridge', and 'null' as connection names.
- Update test_system_network.py following implementation of docker networking.
- Check if a rock-on uses host networking and disable network customization if true.
- Move existing & proposed low-level docker functions to system/docker.py: helps remove the possibility of circular imports.
- Move existing docker_status from rockon_helpers to system/docker: improves separation of concerns between system-level docker and Rock-ons.
- Establish 'sysnet' (system/network) import namespace within views.
- Display rocknet details in Network summary table.
- Use docker_name for rocknets in network dashboard widget.
- Get containers and corresponding rock-on name into the connection's docker_options.
- Get rocknet(s) for rock-ons:
  - Implement rocknet Backbone model RockOnNetwork and its collection.
  - Create API call to fetch data from DContainerNetwork through RockOnNetwork.
  - Display currently-attached rocknet(s) in rock-on summary table.
- Allow new networks to be specified directly from the field.
- Changes to DPort model: add publish field; add container_name property method.
- Join networks: switch to sequential fetching of Ports, Containers, and then Networks, as the latter depends on the former; display user networks as a multi-select field per container in the rock-on.
- Unpublish ports UI: add backend logic to skip a port at docker run time if unpublished; disable the webUI button if the UI port is unpublished.
- Implement UI to add, edit, and delete docker networks.
- Disallow connection toggle and edit for docker networks.
- Display docker_name in network summary table.
- Fix for container_links: create and connect each linked container to a given network defined in the Rock-on JSON definition file.
Container links

See related issue (#2003) for additional details and history.

Database modifications

As we're keeping the same logic and implementation
This logic has been verified to work with a test rock-on (json file here). This rock-on runs 4 copies of the alpine image, with the following links, pasted below:
"Alpine_link_test_ABCD_2nets": {
"container_links": {
"alpineA": [
{
"name": "alpineA-B",
"source_container": "alpineB"
},
{
"name": "alpineA-C",
"source_container": "alpineC"
}
],
"alpineC": [
{
"name": "alpineC-D",
"source_container": "alpineD"
}
]
},

As expected, this rock-on installs successfully and creates three new networks:

rockdev:~ # docker network ls
NETWORK ID NAME DRIVER SCOPE
987efc226622 alpineA-B bridge local
1d1719c45e5a alpineA-C bridge local
7a4164d5d66f alpineC-D bridge local
ddc23e495855 bridge bridge local
c4d12aa68f4d host host local
3e39b1129036 none null local

Each network connects the correct containers:

rockdev:~ # docker network inspect -f '{{range .Containers}}{{.Name}}{{end}}' alpineA-C
alpineCalpineA
rockdev:~ # docker network inspect -f '{{range .Containers}}{{.Name}}{{end}}' alpineA-B
alpineBalpineA
rockdev:~ # docker network inspect -f '{{range .Containers}}{{.Name}}{{end}}' alpineC-D
alpineCalpineD

And each container can ping the correct ones:

rockdev:~ # docker attach alpineA
/ # for C in 'alpineA' 'alpineB' 'alpineC' 'alpineD'; do ping -c 2 $C ; done
PING alpineA (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.049 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.123 ms
--- alpineA ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.049/0.086/0.123 ms
PING alpineB (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.097 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.142 ms
--- alpineB ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.097/0.119/0.142 ms
PING alpineC (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.097 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.136 ms
--- alpineC ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.097/0.116/0.136 ms
ping: bad address 'alpineD'

rockdev:~ # docker attach alpineB
/ # for C in 'alpineA' 'alpineB' 'alpineC' 'alpineD'; do ping -c 2 $C ; done
PING alpineA (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.085 ms
--- alpineA ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.085/0.090/0.096 ms
PING alpineB (172.18.0.3): 56 data bytes
64 bytes from 172.18.0.3: seq=0 ttl=64 time=0.039 ms
64 bytes from 172.18.0.3: seq=1 ttl=64 time=0.080 ms
--- alpineB ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.039/0.059/0.080 ms
ping: bad address 'alpineC'
ping: bad address 'alpineD'

rockdev:~ # docker attach alpineC
/ # for C in 'alpineA' 'alpineB' 'alpineC' 'alpineD'; do ping -c 2 $C ; done
PING alpineA (172.19.0.2): 56 data bytes
64 bytes from 172.19.0.2: seq=0 ttl=64 time=0.105 ms
64 bytes from 172.19.0.2: seq=1 ttl=64 time=0.141 ms
--- alpineA ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.105/0.123/0.141 ms
ping: bad address 'alpineB'
PING alpineC (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.039 ms
64 bytes from 172.19.0.3: seq=1 ttl=64 time=0.063 ms
--- alpineC ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.039/0.051/0.063 ms
PING alpineD (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.109 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.160 ms
--- alpineD ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.109/0.134/0.160 ms

rockdev:~ # docker attach alpineD
/ # for C in 'alpineA' 'alpineB' 'alpineC' 'alpineD'; do ping -c 2 $C ; done
ping: bad address 'alpineA'
ping: bad address 'alpineB'
PING alpineC (172.20.0.2): 56 data bytes
64 bytes from 172.20.0.2: seq=0 ttl=64 time=0.075 ms
64 bytes from 172.20.0.2: seq=1 ttl=64 time=0.107 ms
--- alpineC ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.075/0.091/0.107 ms
PING alpineD (172.20.0.3): 56 data bytes
64 bytes from 172.20.0.3: seq=0 ttl=64 time=0.067 ms
64 bytes from 172.20.0.3: seq=1 ttl=64 time=0.113 ms
--- alpineD ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.067/0.090/0.113 ms

After Rock-on uninstall, all networks are correctly deleted:

rockdev:~ # docker network ls
NETWORK ID NAME DRIVER SCOPE
ddc23e495855 bridge bridge local
c4d12aa68f4d host host local
3e39b1129036 none null local

Finally, the resulting docker networks are surfaced to the user in the "System" > "Networks" page. As mentioned above, the docker networks created by a rock-on definition file are deemed critical to the rock-on's function, so we do not allow the user to edit or delete them (the respective icons are not displayed), similar to the default docker networks (bridge, host, none).
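To make the mechanism concrete, here is a minimal sketch of the idea, assuming plain docker CLI calls driven from Python; the function name and call sites are illustrative assumptions, not the PR's actual code:

from subprocess import run

DOCKER = "/usr/bin/docker"

def link_containers(link_name, source_container, target_container):
    # Hypothetical helper: create a dedicated bridge network named after the
    # container_link, then attach both ends of the link to it.
    run([DOCKER, "network", "create", "--driver", "bridge", link_name], check=True)
    for container in (source_container, target_container):
        run([DOCKER, "network", "connect", link_name, container], check=True)

# For the "alpineA-B" link of the test rock-on above, this would amount to:
# link_containers("alpineA-B", "alpineB", "alpineA")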
Create a rocknet

See dedicated issue (#2009) for additional details and history. In order to offer the same level of customization as for system connections, we keep the same configuration method, set to "Auto" by default, with the possibility of selecting "Manual" parameters. In the latter case, docker-specific fields appear, corresponding to the options offered by the docker network create command. As per the docker documentation, these are as follows:
In our case, we only support the bridge driver (to begin with, at least), which leaves us with the following parameters:
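As an illustration only (all values below are made up), the "Manual" form would ultimately amount to a docker network create call along these lines, using the standard bridge-scoped flags:

from subprocess import run

# Assumed example values; only the flags themselves are standard docker options.
run([
    "/usr/bin/docker", "network", "create",
    "--driver", "bridge",
    "--subnet", "172.30.0.0/16",            # network subnet
    "--gateway", "172.30.0.1",              # gateway for the subnet
    "--ip-range", "172.30.5.0/24",          # range container IPs are drawn from
    "--aux-address", "router=172.30.5.5",   # reserved address, excluded from allocation
    "-o", "com.docker.network.bridge.enable_icc=true",  # inter-container comms
    "my-custom-rocknet",
], check=True)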
For consistency, one can also edit an existing rocknet. Upon edit, the network is disconnected from any attached containers, deleted, and re-created with the new settings, before being reconnected to all the containers that were attached to the original network. @phillxnet: note here that I have found it easy to fill in incompatible parameters when setting a rocknet manually. As a result, I have included a safety: the existing settings of a rocknet are first "written down" before we attempt to re-create it with new settings. If the latter fails, we re-create and re-connect the rocknet as it was before (thanks to its original settings having been "written down"); see the sketch after this paragraph. Finally, one can also easily delete a rocknet using the trash icon, in which case the attached containers are first disconnected. There are also a few UI elements of interest:
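Regarding the edit flow referenced above, a minimal sketch of the "write down, re-create, roll back on failure" logic, assuming plain docker CLI calls; all helper names are hypothetical:

from subprocess import CalledProcessError, run
import json

DOCKER = "/usr/bin/docker"

def inspect_network(name):
    out = run([DOCKER, "network", "inspect", name],
              capture_output=True, text=True, check=True).stdout
    return json.loads(out)[0]

def edit_rocknet(name, new_args):
    # 1. "Write down" the current settings and attached containers.
    original = inspect_network(name)
    containers = [c["Name"] for c in original["Containers"].values()]
    ipam = original["IPAM"]["Config"][0]
    # 2. Disconnect all attached containers, then delete the network.
    for c in containers:
        run([DOCKER, "network", "disconnect", name, c], check=True)
    run([DOCKER, "network", "rm", name], check=True)
    try:
        # 3. Re-create with the new (possibly incompatible) settings.
        run([DOCKER, "network", "create"] + new_args + [name], check=True)
    except CalledProcessError:
        # 4. Safety: re-create the network as it was before.
        rollback = [DOCKER, "network", "create", "--subnet", ipam["Subnet"]]
        if ipam.get("Gateway"):
            rollback += ["--gateway", ipam["Gateway"]]
        run(rollback + [name], check=True)
    # 5. Reconnect all previously attached containers.
    for c in containers:
        run([DOCKER, "network", "connect", name, c], check=True)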
Database modifications

This section of the PR requires the creation of two new models in the storageadmin database:
Unpublish port and connect to rocknets

See dedicated issue (#2013) for additional details and history.

(Un)publish ports

Clicking on the Networking button leads to a new page. As all docker networking options are applied at the container level, we surface all options below for each container:
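On the backend side, the publish/unpublish choice boils down to whether a -p mapping is emitted at docker run time. A hedged sketch of that idea (attribute names are assumptions based on the DPort changes listed in the commit summary):

def port_args(ports):
    # ports: DPort-like objects assumed to expose hostp, containerp and the
    # new publish flag added by this PR.
    args = []
    for port in ports:
        if not port.publish:
            continue  # unpublished port: no -p mapping at docker run time
        args += ["-p", "{}:{}".format(port.hostp, port.containerp)]
    return args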
Of note, if the UI port is unpublished, the "webUI" access button for the given rock-on is greyed out (and disabled), accompanied by a tooltip (on mouse hover) explaining that the button is disabled because the corresponding UI port is currently unpublished. See screenshot below for illustration.

Rocknets connections
In the example below, for instance, both containers of the rock-on will be connected to the rocknet rocknet01, and one of them (alpine2p2) additionally to rocknet02. After submission and update of the rock-on, we can verify they are connected to the correct rocknets:

rockdev:~ # docker inspect rocknet01
(...)
"Containers": {
"8054e4b0d41a8a09db25a8b228de228cdd372acda4c5885ca0b3e68a266304b4": {
"Name": "alpine2p1",
"EndpointID": "dbbecfe3306e3ba6f6889efb455c421802e2f83c9051f553655bff807186de0a",
"MacAddress": "02:42:ac:18:00:02",
"IPv4Address": "172.24.0.2/16",
"IPv6Address": ""
},
"8dd41e44d5a01272334fd0c2aa59f66f16f3dfbf20fd5bf0e1196e97e3792d28": {
"Name": "alpine2p2",
"EndpointID": "04b8dad378e4e9a61a6d1c0e9cccd1cfb49d48cd785a8f01c321ba350c9bc323",
"MacAddress": "02:42:ac:18:00:03",
"IPv4Address": "172.24.0.3/16",
"IPv6Address": ""
}
},
(...)

rockdev:~ # docker inspect rocknet02
(...)
"Containers": {
"8dd41e44d5a01272334fd0c2aa59f66f16f3dfbf20fd5bf0e1196e97e3792d28": {
"Name": "alpine2p2",
"EndpointID": "467e37ab7403f29c9a8c322c66cc4da5be58bc5ad5bc21510a567419e6161106",
"MacAddress": "02:42:ac:19:00:02",
"IPv4Address": "172.25.0.2/16",
"IPv6Address": ""
}
},
(...)

Several points of interest:
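One such point: the list of attached containers that the webUI surfaces can be read directly from the inspect output shown above. A minimal parsing sketch (helper name hypothetical):

from subprocess import run
import json

def attached_containers(network):
    out = run(["/usr/bin/docker", "network", "inspect", network],
              capture_output=True, text=True, check=True).stdout
    # The 'Containers' map is keyed by container ID; we only need the names.
    return sorted(c["Name"] for c in json.loads(out)[0]["Containers"].values())

# attached_containers("rocknet01")  ->  ['alpine2p1', 'alpine2p2']
# attached_containers("rocknet02")  ->  ['alpine2p2']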
Database modifications

The newly-added publish/unpublish option needs to be stored in the DPort model (via its new publish field).

API changes

A new storageadmin view was created:
Miscellaneous

As all implemented docker networks use the bridge driver, we now deal with/parse these network connections directly; as a result, we no longer throw errors in the logs.

As discussed in my fork, many docker-related base functions are proposed to be moved to system/docker.py, with unit tests updated accordingly.

In the section dedicated to joining rocknet(s) to container(s), I've included a help icon (question mark) intended to link to the corresponding section of Rockstor's documentation. As that section does not yet exist, it currently points to the documentation's main page and will need to be updated once the documentation is written.

Database migration

As described above, several changes were made to storageadmin, including edits to existing models as well as the addition of new models. A migration was thus created as follows...
... and applied successfully:
Testing

Leap 15.2 (ISO install):

Ran 212 tests in 58.200s
FAILED (failures=5, errors=1)

Full Testing Outputs

Leap 15.2 (ISO install)
@phillxnet, I just realized I lost the attribution of your work in this PR after squashing all commits together... I'm sorry for not noticing this before submitting. Let me know if you can think of a way to correct that.
@FroggyFlox Re:
This is a fantastic addition by the way, and I'm super excited to finally have this lined up ready for review; thanks for keeping it maintained in the background as we readied our Rockstor 4 offering. I'll hopefully get to the review of this shortly, and we can then pop it in the testing channel for a bit to make sure all is OK once it's out in the wild, as it were.
Thanks a lot, @phillxnet ... Cheers!
@FroggyFlox As discussed side channel, I'm intending to use this PR as the first entrant to the post-"Build on openSUSE" Stable channel release (currently in final Release Candidate phase), so we can at least 'field' it a little in a short testing run before its consequent rpm builds are, in turn, promoted to the Stable channel if all looks to be OK in the wild. This way we can get this into Stable fairly quickly and leave the testing channel free for our pending, longer-term technical-debt Django/Python/etc. updates, which will inevitably shake things up as we move towards the subsequent stable releases thereafter.
Thanks a lot for the consideration, @phillxnet! I too am looking forward to seeing feedback from users on the new features introduced by this PR, as it contains quite a few novelties. I already have a branch for related documentation updates (https://github.com/FroggyFlox/rockstor-doc/tree/WIP_Docker_networking), but I don't think it should get into the docs before this PR gets to Stable...
I am currently reviewing this PR and so far so good, as usual with @FroggyFlox PRs :). I had previously stated:
As-is, we are on RC5 (ver 4.0.4) rpms in our "Build on openSUSE" endeavour and, due to an issue / oversight on my part, we had to populate the stable channel with this same 4.0.4 rpm version. As such, we can now release an rpm with this significant code change into testing only, for a bit, and preserve our current stable / testing channels: if only as RC5 in stable and RC6 in testing. Just keen to get this into field testing so we can establish it as the 'new norm' for our Rock-ons infrastructure, ready for the final stable release once we are done with the Release Candidates that currently populate both testing and stable channels. @FroggyFlox Apologies for the massive delay on this one. But at least we can now hopefully get this in ready for our final 4 release, which will be nice, especially given the massive amount of preparation you've put in. Currently, as we've discussed side channel already, I've only noticed the one unit test failing, due to the assumption of a docker daemon running (I think). This is no show stopper, so I'll continue and hopefully get this merged; given you've had to re-base multiple times already, I really need to get my merging/release act together. Thanks for taking the time and effort to be so detailed and fastidious in the preparation of both this code contribution and its presentation within this PR. Helps massively for my limited cognitive ability :).
This is a nice re-use of an existing failed/legacy mechanism. Well done.
Nice.
Another nice.
Well done. We really need to flag these host-networking Rock-ons in the next testing run after getting Stable out.
@FroggyFlox
I'm assuming currently that I'm doing it wrong? Apologies, but I'm still getting to grips with this docker networking thing. Or do we have some kind of race condition with this 'path'? Again, not a show stopper, but noting it just in case this is a corner case or some kind of unintended use.
@FroggyFlox The resulting system then shows no Rock-ons installed. But we have:
So it looks like the above use/misuse throws our db reflection of state out.
Mmm... curious... This is the expected error when trying to connect a docker container to a docker network if the container is using host networking. This is actually exactly why I tried to implement an exception and disable networking edits for this kind of rock-on. I'll have to see later tonight why it wasn't detected as such, though. Thanks for finding that one.
Out of curiosity, what does the following return?
OK, so at least it's understood by at least one of us :). Thanks for such a quick response. Again, not a show stopper for the testing channel; I just wanted to note it as I went along in case I stumbled across something. Netdata does use host networking. If it's just that, then it's likely a simple bug. I'll proceed on that assumption, as there are likely to be one or two in this amount of code addition. I think, as we are re-installing (removing and re-installing) and get an exception, we end up with a not-installed state.
But it is still not showing in the Web-UI. Attempting a workaround for now so we know how to get folks out of this state, as I'm still keen to get this merged and into the wild for proper testing.
Thanks a lot for all the feedback... That's the correct output, but somehow we're not catching/parsing it correctly :-( I'll have a look when I can and try to implement your feedback on how to deal with this state. For a workaround here, I think we "simply" need to run our
System - Network: delete the "rocknet1" previously added to netdata
But we then get the following:
@FroggyFlox is this just our network db entries being brought back in line with system state? EDIT:
@FroggyFlox OK, I'm pretty sure this is just a re-sync process, as all looks to be functional still, and further refreshes don't exhibit this. Also, I can add a 'user' / custom rocknet just fine and it shows up as expected. This is such a significant added capability: The above also confirms the edit capability on the user rocknet and not on the auto-added, rock-on-defined (required for function) rocknets. Yet another nice.
Yes, that is how the |
# In some cases, DNET inspect does NOT return Gateway in Docker version 18.09.5, build e8ff056
# This is likely related to the following bug in which the 'Gateway' is not reported the first | ||
# time the docker daemon is started. Upon reload of docker daemon, it IS correctly reported. | ||
# https://github.com/moby/moby/issues/26799 |
Nice find. Thanks for documenting this in-code.
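Given that upstream bug, any parsing of the inspect output should treat 'Gateway' as optional. A hedged sketch of such defensive handling (function name hypothetical):

def get_gateway(ipam_config):
    # ipam_config: the IPAM 'Config' list from `docker network inspect`.
    for entry in ipam_config:
        gateway = entry.get("Gateway")
        if gateway:
            return gateway
    return None  # daemon has not (yet) reported a gateway for this network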
@FroggyFlox This is all super impressive and, as per general comments to date, looks to be functioning as expected: thanks for the ongoing comments. As you have explained, we have a bug here or there, but that's hardly surprising in this amount of code addition; we can address them in smaller, more manageable pull requests once we have this merged. I've mainly done a functional review given your greater expertise in this code area, so given our discussions to date I'm going to merge this as-is so we can move to publishing in the testing channel first and address the niggles in time as we go along. It would be a terrible shame for this PR to code-rot, and I've taken so long reviewing it that there is a danger of this happening if we don't get it merged.
Thanks again for again raising our game.
As always much appreciated.
Fixes #1982.
Fixes #2003.
Fixes #2013.
Fixes #2009.
@phillxnet, ready for review.
As detailed in #1982, inter-container communication in multi-container rock-ons is currently broken due to Docker's deprecation of container links. As originally proposed by @flukejones, we should switch to using docker networks instead.
This pull request thus fixes inter-container communication by following @flukejones' idea and recommendation. It also takes the opportunity to expand upon this and proposes a deeper integration of docker networks into Rockstor.
Note: As this PR includes a substantial number of features and tests, each main point will be split into separate posts.
Immense thanks to @phillxnet for his critical intervention and restructuring of the corresponding branch in the development of this PR.
Overall aims & logic
The proposed implementation of docker networks (referred to as rocknets within Rockstor) can be split into three main parts:

1. Fix container links by creating a dedicated docker network for each container_links object in rock-on definitions (#2003).
2. Let the user create, edit, and delete their own docker networks, i.e. rocknets (#2009).
3. Let the user (un)publish ports and connect container(s) to rocknet(s) (#2013).

In point 1, this PR follows @flukejones' idea and simply uses the existing container_links object in the rock-on json to create a dedicated docker network linking the two containers defined in each container_link. Notably, as these docker networks are defined in the rock-on json, they are deemed critical for the proper function of the rock-on and are thus not editable by the user. They are still surfaced to the webUI for convenience, however.

While rocknets (created in points 2 and 3 above) use the same kind of docker networks as those defined in container_links, they differ in that they are created by the user. As a result, they are editable by the user and can be used to connect any container(s) the user desires.
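As a final illustration of this user-driven case, the lifecycle of such a rocknet reduces to the standard docker network commands; a minimal sketch with made-up names:

from subprocess import run

DOCKER = "/usr/bin/docker"

# Create a user rocknet, then attach a running container to it.
run([DOCKER, "network", "create", "--driver", "bridge", "myrocknet"], check=True)
run([DOCKER, "network", "connect", "myrocknet", "alpine2p1"], check=True)
# ...on deletion, attached containers are disconnected first:
run([DOCKER, "network", "disconnect", "myrocknet", "alpine2p1"], check=True)
run([DOCKER, "network", "rm", "myrocknet"], check=True)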