[Rock-ons] UI for adding a new docker network #2009

Closed
FroggyFlox opened this issue Jan 18, 2019 · 7 comments · Fixed by #2207

Comments

@FroggyFlox
Member

This is an issue dedicated to step 3 of the Docker networks (re)work (#1982), corresponding to the implementation of an interface for the creation of a docker network.

As discussed in #1982, this would be better integrated in the existing "System > Network" part of Rockstor's UI. My current idea would thus be to simply add a new option in the "Add Connection" section to create a docker network, as seen below:
[screenshot: proposed "Add Connection" dropdown including a new docker network option]

To offer the same level of customization as for system connections, we can keep the same configuration method, set to "Auto" by default, with the possibility of selecting "Manual" parameters. In the latter case, docker-specific fields will appear, corresponding to the options offered by the docker network create command.

As per the docker documentation, these are as follows:

--attachable      Enable manual container attachment
--aux-address     Auxiliary IPv4 or IPv6 addresses used by Network driver
--config-from     The network from which to copy the configuration
--config-only     Create a configuration-only network
--driver          Driver to manage the Network
--gateway         IPv4 or IPv6 Gateway for the master subnet
--ingress         Create swarm routing-mesh network
--internal        Restrict external access to the network
--ip-range        Allocate container IP from a sub-range
--ipam-driver     IP Address Management Driver
--ipam-opt        Set IPAM driver specific options
--ipv6            Enable IPv6 networking
--label           Set metadata on a network
--scope           Control the network's scope
--subnet          Subnet in CIDR format that represents a network segment
--opt             Set driver specific options:
     com.docker.network.bridge.name                   Bridge name to be used when creating the Linux bridge
     com.docker.network.bridge.enable_ip_masquerade   Enable IP masquerading
     com.docker.network.bridge.enable_icc             Enable or disable inter-container connectivity
     com.docker.network.bridge.host_binding_ipv4      Default IP when binding container ports
     com.docker.network.driver.mtu                    Set the container's network MTU

In our case, we would need to support only the bridge driver (to begin with, at least), which leaves us with the following parameters:

--aux-address     Auxiliary IPv4 or IPv6 addresses used by Network driver
--gateway         IPv4 or IPv6 Gateway for the master subnet
--internal        Restrict external access to the network
--ip-range        Allocate container IP from a sub-range
--ipv6            Enable IPv6 networking
--subnet          Subnet in CIDR format that represents a network segment
--opt, -o         Set driver specific options, as follows:
     com.docker.network.bridge.enable_ip_masquerade   Enable IP masquerading
     com.docker.network.bridge.enable_icc             Enable or disable inter-container connectivity
     com.docker.network.bridge.host_binding_ipv4      Default IP when binding container ports
     com.docker.network.driver.mtu                    Set the container's network MTU
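To illustrate how a "Manual" mode form could map onto these bridge-driver flags, here is a minimal Python sketch; the function and field names are hypothetical illustrations, not Rockstor's actual code or form fields:

```python
# Hypothetical sketch: translate "Manual" mode fields into a
# `docker network create` command line for the bridge driver.
# Field names (subnet, gateway, icc, ...) are illustrative only.

def build_network_create_cmd(name, subnet=None, gateway=None, aux_address=None,
                             ip_range=None, internal=False, ipv6=False,
                             icc=None, ip_masquerade=None,
                             host_binding_ipv4=None, mtu=None):
    cmd = ["docker", "network", "create", "--driver", "bridge"]
    if subnet:
        cmd += ["--subnet", subnet]
    if gateway:
        cmd += ["--gateway", gateway]
    if aux_address:
        cmd += ["--aux-address", aux_address]
    if ip_range:
        cmd += ["--ip-range", ip_range]
    if internal:
        cmd.append("--internal")
    if ipv6:
        cmd.append("--ipv6")
    # bridge-driver specific options go through -o key=value
    if icc is not None:
        cmd += ["-o", "com.docker.network.bridge.enable_icc=%s" % str(icc).lower()]
    if ip_masquerade is not None:
        cmd += ["-o", "com.docker.network.bridge.enable_ip_masquerade=%s" % str(ip_masquerade).lower()]
    if host_binding_ipv4:
        cmd += ["-o", "com.docker.network.bridge.host_binding_ipv4=%s" % host_binding_ipv4]
    if mtu:
        cmd += ["-o", "com.docker.network.driver.mtu=%s" % mtu]
    cmd.append(name)
    return cmd
```

In "Auto" mode the same call would simply pass the name alone, letting docker pick its defaults.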

As also discussed in #1982, we could allow users to select one or more existing containers to add to the network at this step, although this may be the subject of a separate issue and PR.

@Luke-Nukem and @phillxnet, as you both participated in the prior discussion, I'd welcome your feedback on any of the parameters above. I don't think enabling anything IPv6-related would be useful, for instance, as I believe Rockstor's UI does not support IPv6 yet.

As mentioned above, I'm currently working on this issue and should be done hopefully quite soon. There are some elements in this work and the one described in #2003 that depend on a pending PR (#1999), however, so I'll continue working on these and refining them until then.

@FroggyFlox
Member Author

I now have a working implementation that addresses this issue.

It allows the user to create a docker network using the existing Add a new connection UI, following the same concept as for other connections supported by Rockstor:

  1. an "Auto" mode (default) that uses all of docker's default settings upon network creation.
  2. a "Manual" mode that allows the user to specify any of the docker network create parameters listed above.

One can also edit an existing docker network (only if it does not correspond to a docker network created during a Rock-on install through the container_links object). Upon edit, the network is disconnected from any attached containers, deleted, and re-created using new settings, before being reconnected to all the containers that were attached to the original network.
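The edit sequence described above (disconnect, delete, re-create, reconnect) could be sketched as follows; this is an illustrative Python outline, not the actual Rockstor implementation, and all names are hypothetical:

```python
# Hypothetical sketch of the network-edit sequence: since docker networks
# cannot be modified in place, build the ordered list of commands to
# disconnect attached containers, remove the network, re-create it with
# the new settings, and reconnect the original containers.

def rebuild_network_cmds(name, containers, create_args):
    cmds = []
    # 1. disconnect every currently-attached container
    for c in containers:
        cmds.append(["docker", "network", "disconnect", name, c])
    # 2. delete the old network
    cmds.append(["docker", "network", "rm", name])
    # 3. re-create it with the user's new settings
    cmds.append(["docker", "network", "create"] + create_args + [name])
    # 4. reconnect the containers that were attached before the edit
    for c in containers:
        cmds.append(["docker", "network", "connect", name, c])
    return cmds
```

Running each returned command in order (e.g. via subprocess) would reproduce the edit behaviour described above.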

@phillxnet
Member

@FroggyFlox As per our ongoing discussion re kipple in the docker system, which has concerned containers / images thus far, I've just had another quick look at 'docker system prune'. Noting it here as it may serve us in removing orphaned networks also:

docker system prune
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all dangling images
        - all build cache
Are you sure you want to continue? [y/N]
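As a narrower alternative, `docker network prune` removes only unused networks, leaving containers and images alone. Detecting orphans from a parsed `docker network inspect` payload could look like the sketch below; the helper is hypothetical, and it only assumes the Name and Containers keys that appear in docker's inspect output:

```python
# Hedged sketch: filter a parsed `docker network inspect` payload
# (a list of dicts) down to networks with no attached containers,
# skipping docker's built-in networks which should never be pruned.

BUILTIN = {"bridge", "host", "none"}

def orphaned_networks(inspect_payload):
    return [n["Name"] for n in inspect_payload
            if n["Name"] not in BUILTIN and not n.get("Containers")]
```

These are the networks `docker network prune` would target ("all networks not used by at least one container").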

Apologies if this comment is incorrectly located.

@FroggyFlox
Member Author

Thanks @phillxnet, that's definitely something I'd like to implement in the webUI, and this is the best place for noting it so far.
I really do need to create a dedicated issue for the "advanced scripts" section that has been mentioned quite a bit now, so that we can keep all of these ideas in one place. I'll try to gather them first before opening a new issue.
Thanks!

@thailgrott

thailgrott commented May 2, 2019

Kinda also related to #1982. May be useful to have the concept of frontend and backend networks. A frontend network may be used for proxy traffic. A backend network may be used to communicate with a database which keeps the traffic private between the two containers.

@FroggyFlox
Member Author

FroggyFlox commented May 2, 2019

Kinda also related to #1982. May be useful to have the concept of frontend and backend networks. A frontend network may be used for proxy traffic. A backend network may be used to communicate with a database which keeps the traffic private between the two containers.

Hi @KookyKrane , thanks for your suggestion, I'm currently in the polishing steps of this work so I'm actively seeking feedback; yours is thus much appreciated!
In what I have so far, docker networks will be "implemented" in two different ways in Rockstor.

The first one is through container links (as detailed in #2003, following #1982 as you pointed out). This will allow us to keep compatibility with our existing rock-ons that use the --link option seamlessly. These will essentially be docker networks created automatically at rock-on install, but won't be editable from Rockstor's webUI as their setup will be defined in the rock-on's json file.

The second one will be through user-defined container networks (maybe termed rocknets). In contrast to the container links described above, these will be defined solely by the user and independently of a rock-on. They will support all the docker network options described above in this issue, and the user will be able to connect any container of their choice to them. For convenience, the user will also be able to connect any container of any installed rock-on to any "rocknet" from the rock-on's settings page (see #2013), as well as create them on the fly.

With proper documentation and examples, I believe this should cover the concept of frontend and backend networks you described... would that indeed be the case? Don't hesitate to share details if you have something more specific in mind; I would love to improve on my current implementation where possible.

@thailgrott

@FroggyFlox Thank you for the explanation of your implementation with considerations for link compatibility versus the new user defined container networks. I'm wondering how the user defined container network would be used when there are multiple containers in a Rockon. Would the Rockon define which containers need an external network which would be added by the user? Or will every Rockon be limited to a single network?

@FroggyFlox
Member Author

Hi @KookyKrane, thank you for your interest and feedback.
I'll try to answer your questions below, but please keep in mind that even though I am close to done with this work, it is still a work in progress and what I describe below is subject to substantial change if a serious problem is detected. Of course, that also means your feedback and suggestions for improvement are more than welcome.

Would the Rockon define which containers need an external network which would be added by the user? Or will every Rockon be limited to a single network?

Briefly, neither. I tried to stay as close as possible to what docker allows and what would be possible from the command line, so everything related to the user-defined docker networks (so-called rocknets) will be at the container level. This means that the user will be able to attach each container of a rock-on to one or more (or no) rocknets, and each rocknet can be attached to one or more containers regardless of the rock-on to which they belong. This way, one may attach different containers from different rock-ons to the same rocknet if desired--in order to place them behind a reverse proxy run from yet another rock-on, for instance.
This customization would be possible on any rock-on, which means that any previously-installed rock-on would be eligible. We could restrict this to only specific containers of a rock-on, but that would require all our existing rock-on definitions to be re-written accordingly, and might also require users to re-install all their rock-ons in order to benefit from the feature. As a result, I believe it is a lot better not to impose such restrictions.
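The many-to-many relation between containers and rocknets could be modeled as in the minimal sketch below; the class and method names are hypothetical and only illustrate the attachment semantics, not Rockstor's data model:

```python
# Illustrative sketch: rocknet attachments live at the container level,
# so each container may join several rocknets and each rocknet may hold
# containers from any number of rock-ons.

from collections import defaultdict

class RocknetMap:
    def __init__(self):
        # rocknet name -> set of attached container names
        self._nets = defaultdict(set)

    def attach(self, rocknet, container):
        self._nets[rocknet].add(container)

    def containers(self, rocknet):
        """All containers attached to a given rocknet."""
        return sorted(self._nets[rocknet])

    def rocknets(self, container):
        """All rocknets a given container is attached to."""
        return sorted(n for n, cs in self._nets.items() if container in cs)
```

With this model, placing containers from two different rock-ons behind the same reverse proxy is just two attach calls on the same rocknet.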

As always, a picture can be very helpful in illustrating what I just wrote, so have a look at the screenshot below (please note that the "look" of it is still subject to change):

[screenshot: rock-on settings page showing per-container rocknet attachment for containers alpine2p1 and alpine2p2]

In this example above, the rock-on includes two containers: alpine2p1 and alpine2p2, and each container can be attached separately to a rocknet of choice. Note how, in this example, the rocknet dnet01 is attached to both containers, whereas dnet02 is attached to only one of the containers (alpine2p2).

It can be quite complicated to implement a feature based on the inner workings of the rock-ons while maintaining the simplicity and intuitive use that the rock-ons bring, but I believe this strikes a decent balance between enhanced customization and simplicity, especially if supported by good documentation (note the help icon linking to it).

I hope I was able to answer your questions, and thanks again for any feedback you would have on it!
