Issue with running Pi-hole 4.1.1 in swarm (--cap-add) #392
Comments
You can add |
I can confirm this does work around the issue. Thanks Adam! |
What is your setup for swarm? I think the issue is related to the gateway, since it can't reach the default gateway, but I don't know how to tackle this.
TIA. Edit: So I fixed it by changing the DNS settings to listen on all interfaces, not only on eth0, but I'm still irritated by the |
Same issue here, running pi-hole on Swarm successfully for months, until this problem came up.
However, just as above, even though it's blocking ads, the log still shows the following warning:
And ultimately logs errors as well:
My compose file does override DNS entries:
Now, I'm not sure why Pi-hole expects /etc/resolv.conf to always contain 127.0.0.1, given that for user-defined networks Docker always sets 127.0.0.11 instead and manages resolution internally (according to their documentation: https://docs.docker.com/v17.09/engine/userguide/networking/configure-dns/). Hopefully those errors in the logs won't affect Pi-hole's stability. |
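For reference, containers attached to a user-defined network get Docker's embedded DNS server in their resolv.conf; a quick way to see it (the container name here is illustrative):

    docker exec pihole cat /etc/resolv.conf
    # Typical output on a user-defined bridge/overlay network:
    #   nameserver 127.0.0.11
    #   options ndots:0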
I have the same issue, and not in swarm mode: a single Docker container running on a Pi B+. A problem with the image?
|
127.0.0.11 is the default address in Docker's resolv.conf, I believe, so it seems swarm isn't letting you override it via the standard --dns method. This sounds a lot like what Synology has happen, and they've worked around the problem by force-overwriting /etc/resolv.conf with a read-only volume containing the setting you want: http://tonylawrence.com/posts/unix/synology/running-pihole-inside-docker/ |
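For reference, that workaround is roughly: put a resolv.conf on the host that contains the loopback resolver and bind-mount it read-only over the container's /etc/resolv.conf, so Docker/Swarm cannot rewrite it. A minimal sketch (the host path is an assumption for illustration; note the permission caveat in the next comment):

    # On the host (path is illustrative):
    echo "nameserver 127.0.0.1" | sudo tee /opt/pihole/resolv.conf

    # Mount it read-only into the container:
    docker run -d --name pihole \
      -v /opt/pihole/resolv.conf:/etc/resolv.conf:ro \
      pihole/pihole:latest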
I did try mounting /etc/resolv.conf, however that caused other issues (permission related, since chown fails on that file). Not sure if it might have something to do with the way I personally have it set up:
But that setup no longer worked in the latest release, and the only way to get it to work again was by adding the FTL_CMD=debug and DNSMASQ_LISTENING=all environment variables. |
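For anyone else landing here, a compose-style sketch with those two variables (everything apart from the two variables is illustrative, not taken from the poster's file):

    version: '3'
    services:
      pihole:
        image: pihole/pihole:latest
        environment:
          FTL_CMD: 'debug'           # workaround from this thread: lets FTL start without NET_ADMIN
          DNSMASQ_LISTENING: 'all'   # listen on all interfaces, not only eth0
        ports:
          - "53:53/tcp"
          - "53:53/udp"
          - "80:80/tcp"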
+1 here...
Some logs follow...
However, it still complains about the DNS service not running.
Right now it is working, even though it complains DNS is not working. |
Related to: |
Even though it gets working with @pabloromeo's workaround, all the Pi-hole reporting gets messed up, as Pi-hole sees the queries as if they were originating from the "10.255.0.3" IP address instead of the correct host's IP address. |
Is 10.255.0.3 one of swarm's docker networks? I'm unfamiliar with that range. |
What I've noticed, and I hope I'm not just stating the obvious, is that this issue only seems to happen on my user-defined bridge network. When --network=dns-net, the --dns parameters are ignored and I get the 127.0.0.11 error. When --network=host, the --dns parameters are used and everything starts, except that the blocklists are not downloaded:
The relevant part of my run command is:
|
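For illustration only (this is not the poster's actual command, which was not captured above), a host-networked run along those lines looks roughly like the following; the ServerIP value is an assumption:

    docker run -d --name pihole \
      --network=host \
      --dns=127.0.0.1 --dns=1.1.1.1 \
      -e ServerIP=192.168.1.2 \
      pihole/pihole:latest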
Hello, I can confirm that without [...]. Here is a working version of the compose file for me (DNS filtering only, not a DHCP server):

    version: '3'
    services:
      pihole:
        container_name: pihole
        hostname: pihole.lebeau.ovh
        image: pihole/pihole:latest
        #ports:
        #  - "53:53/tcp"
        #  - "53:53/udp"
        #  - "8080:80/tcp"
        network_mode: "host"
        environment:
          TZ: 'Europe/Paris'
          IPv6: 'False'
          ServerIP: '192.168.1.10'
          INTERFACE: 'eth1'
          DNSMASQ_USER: 'pihole'
          WEB_PORT: '8080'
        volumes:
          - '/home/pihole/pihole/:/etc/pihole/'
          - '/home/pihole/dnsmasq.d/:/etc/dnsmasq.d/'
        dns:
          - 127.0.0.1
          - 192.168.1.1
        restart: unless-stopped
        cap_add:
          - NET_ADMIN
A workaround on swarm for the lack of capabilities is to use macvlan. My compose file is as follows; the admin interface is behind a Traefik proxy.
|
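For illustration (the commenter's actual file was not captured above), the macvlan network itself can be created along these lines; the subnet, gateway, and parent interface below are assumptions, and on swarm you would additionally need per-node config-only networks, which is beyond this sketch:

    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      -o parent=eth0 \
      pihole_macvlan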
Did a solution present itself? I have a couple of Pi-holes running in Swarm, and the source IPs logged are from the Docker ingress network and NOT the actual client IPs querying the Pi-holes. |
In my case I do get the proper IPs of the clients, and even hostname resolution from the router (by configuring the router IP in the Pi-hole UI). Because of that, I don't actually need to run more than one replica of the service; if it ever goes down a new one will come up automatically, so I haven't needed additional redundancy. Macvlan may be an option, but the problem with that is that each swarm node would need a different IP range to avoid conflicts, which would mean there wouldn't be a single fixed IP to specify in the router's DNS config. That's why I went with keepalived and one single VIP. |
@pabloromeo Can you list your compose-file for pihole? |
@MadsBen sure. Now, take into account that I'm doing this in tandem with Keepalived running on the whole cluster, with a fixed virtual IP pointing at whichever node is running Pi-hole at the moment (through a keepalived check script that tests whether the current node is using port 53; if so, that node should be the master for that VIP in keepalived). Then I just configure my network router to use that VIP as the primary DNS. The check_dns script for Keepalived is very simple:
Regarding your container not starting up, I'd recommend tailing its logs during startup; there should be error information there. I run Pi-hole using the following:
|
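As a rough, illustrative sketch of a swarm stack along the lines discussed in this thread (volume paths, timezone, and published ports are assumptions; host-mode publishing is what preserves client source IPs by bypassing the ingress mesh):

    version: '3.7'
    services:
      pihole:
        image: pihole/pihole:latest
        environment:
          TZ: 'UTC'
          FTL_CMD: 'debug'
          DNSMASQ_LISTENING: 'all'
        ports:
          - target: 53
            published: 53
            protocol: udp
            mode: host      # bypass the ingress mesh so clients' source IPs are kept
          - target: 53
            published: 53
            protocol: tcp
            mode: host
          - target: 80
            published: 80
            protocol: tcp
        volumes:
          - /srv/pihole/etc-pihole:/etc/pihole
          - /srv/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
        deploy:
          replicas: 1
          restart_policy:
            condition: any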
@pabloromeo Thx for your help. I already had keepalived running, but struggled with publishing the ports to the host for the Docker container. I'm running Ubuntu, and it turns out that was the problem, as it already runs a dnsmasq instance listening on port 53...
@pabloromeo this is slightly off topic: why the need for keepalived? If one has nodea and nodeb, pihole only runs on those, AND one specifies dns1=nodea and dns2=nodeb on every client, wouldn't DNS continue to resolve? Also, do you replicate that NFS share in any way so that you can cope with the loss of any one machine running the share? Also, to add value to this thread: if using Portainer you have to use the alternate format for env vars or they don't seem to work (see the sketch below).
|
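Compose accepts environment variables either as a YAML mapping or as a list of KEY=value strings; if one form does not take effect in Portainer, try the other (pick one, not both):

    # Mapping form:
    environment:
      FTL_CMD: 'debug'
      DNSMASQ_LISTENING: 'all'

    # List ("alternate") form:
    environment:
      - FTL_CMD=debug
      - DNSMASQ_LISTENING=all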
@scyto sure, that would also work if you run it on two nodes. My scenario was a bit different. If I remember correctly that was the thinking behind Keepalived :) |
@pabloromeo thanks! That really helps my mental model. I discovered the single-client-IP issue when I set up my first swarm node last night. Next up: move to host or macvlan. Do you recommend running keepalived in Docker or on the hosts directly? |
@scyto in my case I run keepalived through docker too. |
"...have keepalived running in the cluster with a virtual IP pointing only to the current host listening on port 53 (the one actively running pihole)." I've searched for how to setup this, but can't find any solution. Can you explain how you did this? I understand how I can setup a VIP on the hosts or using a docker based keepalived image. But I can't understand how to make sure the active VIP points to the node running the pihole docker container? |
Docker Swarm now supports |
Ah, I see. So, the way I get the VIP to run where Pi-hole is, is by using a basic check script; all it does is check whether port 53 is in use. check_dns.sh:
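The thread only says the script checks whether port 53 is in use, so the exact commands below are an assumption; a minimal sketch:

    #!/bin/bash
    # check_dns.sh: exit 0 if something is listening on port 53, non-zero otherwise.
    ss -lntu | grep -q ':53 '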
And I make that script available to the Docker container at runtime at /etc/keepalived/check_dns.sh.
And within the VRRP instance block you add the track_script section to use that check:
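In keepalived terms that usually means declaring a vrrp_script and referencing it from the instance's track_script block; a sketch (the interface, router id, priority, and VIP below are assumptions):

    # /etc/keepalived/keepalived.conf (sketch)
    vrrp_script chk_dns {
        script "/etc/keepalived/check_dns.sh"
        interval 5    # run the check every 5 seconds
        fall 2        # 2 consecutive failures mark the node as faulted
        rise 2        # 2 consecutive successes mark it healthy again
    }

    vrrp_instance VI_1 {
        interface eth0             # assumption
        state BACKUP
        virtual_router_id 51       # assumption
        priority 100
        virtual_ipaddress {
            192.168.1.250          # assumption: the DNS VIP given to the router
        }
        track_script {
            chk_dns
        }
    }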
So nodes that aren't running anything on port 53 will fail the chk_dns, and only the one with pihole will have a successful status code and become MASTER of that VIP. |
In raising this issue, I confirm the following:
{please fill the checkboxes, e.g: [X]}
How familiar are you with the source code relevant to this issue?:
{1}
Expected behaviour:
`{Up until 4.1.1 I have been running Pi-hole as a Docker swarm service on an x64 Linux cluster running Ubuntu 16.04 (a five-node swarm). This required a little modification of the normal install, but only in how the image is started up; no real functionality changes. This has allowed me to scale Pi-hole to 2 replicas for redundancy and to allow it to run on any of the five hosts in the swarm should one fail.
}`
Actual behaviour:
`{Once I pulled the latest image (4.1.1) my service would no longer boot at all. I even removed my service and set it up again, thinking it was something I accidentally changed, since I was also trying to configure placement preferences. Eventually that led me to the image repository, and I noticed changes had been made that require flags which are not allowed in swarm.
}`
Steps to reproduce:
`{I realize this is somewhat of a special config, since running Pi-hole as a Docker service is not the way it is typically run; however, swarm and compose are very similar in most regards other than the issue I found...
After some testing and having my service run a prior image, I have concluded that --cap-add=NET_ADMIN is the culprit that breaks Pi-hole running as a service (see the sketch after this section). I can see that --cap-add is not supported in swarm for security reasons, and docker/moby says "it's being worked on", but it has been two-plus years of "working on it".
Since this is not standard, I'm willing to work with you on how to better articulate the issue if it is not clear, or to set up any sort of demo of the issue.
}`
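For context, this is roughly what deploying such a stack looked like at the time; the warning text below is an assumption from memory and may differ by Docker version:

    # docker-compose.yml contains cap_add: [NET_ADMIN]
    docker stack deploy -c docker-compose.yml pihole
    # Ignoring unsupported options: cap_add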
Debug token provided by uploading
pihole -d
log:{Service will not start, debug token unavailable.}
Troubleshooting undertaken, and/or other relevant information:
`{Setting my image to 4.1 solves the issue and lets the service start up. I fear, though, that --cap-add won't be supported in the near or even medium term, and therefore I will be stuck on an old image that may or may not have compatibility issues in the future.
docker service inspect pihole:
{
"AppArmorProfile": "docker-default",
"Args": [],
"Config": {
"ArgsEscaped": true,
"AttachStderr": false,
"AttachStdin": false,
"AttachStdout": false,
"Cmd": null,
"Domainname": "",
"Entrypoint": [
"/s6-init"
],
"Env": [
"WEBPASSWORD=",
"TZ=Eastern",
"IPv6=False",
"ServerIP=127.0.0.1",
"DNS1=1.1.1.1",
"DNS2=1.0.0.1",
"PATH=/opt/pihole:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"S6OVERLAY_RELEASE=https://github.com/just-containers/s6-overlay/releases/download/v1.21.7.0/s6-overlay-amd64.tar.gz",
"PIHOLE_INSTALL=/root/ph_install.sh",
"PHP_ENV_CONFIG=/etc/lighttpd/conf-enabled/15-fastcgi-php.conf",
"PHP_ERROR_LOG=/var/log/lighttpd/error.log",
"S6_LOGGING=0",
"S6_KEEP_ENV=1",
"S6_BEHAVIOUR_IF_STAGE2_FAILS=2",
"FTL_CMD=debug",
"VERSION=v4.1",
"ARCH=amd64"
],
"ExposedPorts": {
"443/tcp": {},
"53/tcp": {},
"53/udp": {},
"67/udp": {},
"80/tcp": {}
},
"Healthcheck": {
"Test": [
"CMD-SHELL",
"dig @127.0.0.1 pi.hole || exit 1"
]
},
"Hostname": "HPC2",
"Image": "pihole/pihole:4.1@sha256:3c165a8656d22b75ad237d86ba3bdf0d121088c144c0b2d34a0775a9db2048d7",
"Labels": {
"com.docker.swarm.node.id": "qkldc61gsxbqniii2rpkqjwlb",
"com.docker.swarm.service.id": "4ym908jc96qxvqnxuczn48mlh",
"com.docker.swarm.service.name": "pihole",
"com.docker.swarm.task": "",
"com.docker.swarm.task.id": "nb9cs1ietpry76pxdqf6acolj",
"com.docker.swarm.task.name": "pihole.1.nb9cs1ietpry76pxdqf6acolj",
"image": "pihole/pihole:v4.1_amd64",
"maintainer": "[email protected]",
"url": "https://www.github.com/pi-hole/docker-pi-hole"
},
"OnBuild": null,
"OpenStdin": false,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2019-01-14T14:14:46.320931851Z",
"Driver": "overlay2",
"ExecIDs": [
"8b81d48090e6d2cb05f7b79c8b1c70377680ca98fc20e7b5049e093252affa4f",
"ab6630a1ccff10ae554acac07fd365d031e5621c9bf8088ed362ecdf6605f0e7"
],
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/7a7e3c7c9dd580732dbb4e248812fe156f454846d4adfcec1590a1148d56f5a7-init/diff:/var/lib/docker/overlay2/e4411ae9935a55823ed2e088892172b3ca26eccb18c12ad4f1d5ca797562f6c9/diff:/var/lib/docker/overlay2/f8acc507dc6e36cea4a286b4d3342aec43a53e38c3b0432f269abd4f1941bc0c/diff:/var/lib/docker/overlay2/bd4ce6bcd15554f56994abae241c770df8ac35fa02e17d5535e35fb13d3cdac3/diff:/var/lib/docker/overlay2/f9eb95e392611ec5b7fe11292e60be4f7a752c23455977f74a1a7965305e1d85/diff:/var/lib/docker/overlay2/b9787676c7ff67d298173c596bf712c81ec9b76b9c76ab5b404e8cae0b4f802d/diff:/var/lib/docker/overlay2/a686ee271505c41b38fd3b6a14a33d24dbc573166cf43f14c31753d997b78230/diff:/var/lib/docker/overlay2/73b4f186c76919a4ab386630bdd895eb12f522fb0899af5ff34ff958ec51bd88/diff:/var/lib/docker/overlay2/e6962d8e85bacab08f8912263a1b36152245f249de77577ebb633befd14aa9e9/diff:/var/lib/docker/overlay2/db004dda0cee0d98f4e691e1bcfc340d88e8e8a9946771c0c531381cec4d6954/diff:/var/lib/docker/overlay2/bbc8131790dadd3afe9f405fe55ebe92e38a6a619f337961e6c6b7412281282f/diff",
"MergedDir": "/var/lib/docker/overlay2/7a7e3c7c9dd580732dbb4e248812fe156f454846d4adfcec1590a1148d56f5a7/merged",
"UpperDir": "/var/lib/docker/overlay2/7a7e3c7c9dd580732dbb4e248812fe156f454846d4adfcec1590a1148d56f5a7/diff",
"WorkDir": "/var/lib/docker/overlay2/7a7e3c7c9dd580732dbb4e248812fe156f454846d4adfcec1590a1148d56f5a7/work"
},
"Name": "overlay2"
},
"HostConfig": {
"AutoRemove": false,
"Binds": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceWriteIOps": null,
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"CapAdd": null,
"CapDrop": null,
"Cgroup": "",
"CgroupParent": "",
"ConsoleSize": [
0,
0
],
"ContainerIDFile": "",
"CpuCount": 0,
"CpuPercent": 0,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpuShares": 0,
"CpusetCpus": "",
"CpusetMems": "",
"DeviceCgroupRules": null,
"Devices": null,
"DiskQuota": 0,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": [
"HPC1:172.16.0.191",
"HPC2:172.16.0.192",
"HPC3:172.16.0.193",
"HPC4:172.16.0.194",
"HPC5:172.16.0.195"
],
"GroupAdd": null,
"IOMaximumBandwidth": 0,
"IOMaximumIOps": 0,
"IpcMode": "shareable",
"Isolation": "default",
"KernelMemory": 0,
"Links": null,
"LogConfig": {
"Config": {},
"Type": "json-file"
},
"MaskedPaths": [
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"Memory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"Mounts": [
{
"Source": "/etc/pihole",
"Target": "/etc/pihole",
"Type": "bind"
},
{
"Source": "/etc/dnsmasq.d",
"Target": "/etc/dnsmasq.d",
"Type": "bind"
},
{
"Source": "/etc/hosts",
"Target": "/etc/hosts",
"Type": "bind"
}
],
"NanoCpus": 0,
"NetworkMode": "default",
"OomKillDisable": false,
"OomScoreAdj": 0,
"PidMode": "",
"PidsLimit": 0,
"PortBindings": {},
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyPaths": [
"/proc/asound",
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
],
"ReadonlyRootfs": false,
"RestartPolicy": {
"MaximumRetryCount": 0,
"Name": ""
},
"Runtime": "runc",
"SecurityOpt": null,
"ShmSize": 67108864,
"UTSMode": "",
"Ulimits": null,
"UsernsMode": "",
"VolumeDriver": "",
"VolumesFrom": null
},
"HostnamePath": "/var/lib/docker/containers/724d84a1d0424866984fd3d5e1063a49372b98ac617e6da776d7d3d15597b8ad/hostname",
"HostsPath": "/etc/hosts",
"Id": "724d84a1d0424866984fd3d5e1063a49372b98ac617e6da776d7d3d15597b8ad",
"Image": "sha256:d2cae28ed1651910f7a2317594bd6d566cda90613eb3911cde92860630f81d95",
"LogPath": "/var/lib/docker/containers/724d84a1d0424866984fd3d5e1063a49372b98ac617e6da776d7d3d15597b8ad/724d84a1d0424866984fd3d5e1063a49372b98ac617e6da776d7d3d15597b8ad-json.log",
"MountLabel": "",
"Mounts": [
{
"Destination": "/etc/pihole",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/etc/pihole",
"Type": "bind"
},
{
"Destination": "/etc/dnsmasq.d",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/etc/dnsmasq.d",
"Type": "bind"
},
{
"Destination": "/etc/hosts",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/etc/hosts",
"Type": "bind"
}
],
"Name": "/pihole.1.nb9cs1ietpry76pxdqf6acolj",
"NetworkSettings": {
"Bridge": "",
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"HairpinMode": false,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"MacAddress": "",
"Networks": {
"ingress": {
"Aliases": [
"724d84a1d042"
],
"DriverOpts": null,
"EndpointID": "f8ee7b5f03473fd0b62c8e2ca6fbd142b02e7c3a0d86f50e86e4c5df39c6b2f1",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAMConfig": {
"IPv4Address": "10.255.1.179"
},
"IPAddress": "10.255.1.179",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"Links": null,
"MacAddress": "02:42:0a:ff:01:b3",
"NetworkID": "szxnv7y1azs975wkgra7mc18s"
}
},
"Ports": {
"443/tcp": null,
"53/tcp": null,
"53/udp": null,
"67/udp": null,
"80/tcp": null
},
"SandboxID": "d55da5cd2e11a3d5c62a689e24d17deaed2aac729c4db74633488609fe3fe8ad",
"SandboxKey": "/var/run/docker/netns/d55da5cd2e11",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null
},
"Path": "/s6-init",
"Platform": "linux",
"ProcessLabel": "",
"ResolvConfPath": "/var/lib/docker/containers/724d84a1d0424866984fd3d5e1063a49372b98ac617e6da776d7d3d15597b8ad/resolv.conf",
"RestartCount": 0,
"State": {
"Dead": false,
"Error": "",
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"Health": {
"FailingStreak": 0,
"Log": [
{
"End": "2019-01-14T09:57:37.641382259-05:00",
"ExitCode": 0,
"Output": "\n; <<>> DiG 9.10.3-P4-Debian <<>> @127.0.0.1 pi.hole\n; (1 server found)\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 14480\n;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 4096\n;; QUESTION SECTION:\n;pi.hole.\t\t\tIN\tA\n\n;; ANSWER SECTION:\npi.hole.\t\t2\tIN\tA\t127.0.0.1\n\n;; Query time: 0 msec\n;; SERVER: 127.0.0.1#53(127.0.0.1)\n;; WHEN: Mon Jan 14 14:57:37 Eastern 2019\n;; MSG SIZE rcvd: 52\n\n",
"Start": "2019-01-14T09:57:37.39541478-05:00"
},
{
"End": "2019-01-14T09:58:07.820750563-05:00",
"ExitCode": 0,
"Output": "\n; <<>> DiG 9.10.3-P4-Debian <<>> @127.0.0.1 pi.hole\n; (1 server found)\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8257\n;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 4096\n;; QUESTION SECTION:\n;pi.hole.\t\t\tIN\tA\n\n;; ANSWER SECTION:\npi.hole.\t\t2\tIN\tA\t127.0.0.1\n\n;; Query time: 0 msec\n;; SERVER: 127.0.0.1#53(127.0.0.1)\n;; WHEN: Mon Jan 14 14:58:07 Eastern 2019\n;; MSG SIZE rcvd: 52\n\n",
"Start": "2019-01-14T09:58:07.651049203-05:00"
},
{
"End": "2019-01-14T09:58:38.046312332-05:00",
"ExitCode": 0,
"Output": "\n; <<>> DiG 9.10.3-P4-Debian <<>> @127.0.0.1 pi.hole\n; (1 server found)\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23608\n;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 4096\n;; QUESTION SECTION:\n;pi.hole.\t\t\tIN\tA\n\n;; ANSWER SECTION:\npi.hole.\t\t2\tIN\tA\t127.0.0.1\n\n;; Query time: 0 msec\n;; SERVER: 127.0.0.1#53(127.0.0.1)\n;; WHEN: Mon Jan 14 14:58:37 Eastern 2019\n;; MSG SIZE rcvd: 52\n\n",
"Start": "2019-01-14T09:58:37.830246755-05:00"
},
{
"End": "2019-01-14T09:59:08.194864943-05:00",
"ExitCode": 0,
"Output": "\n; <<>> DiG 9.10.3-P4-Debian <<>> @127.0.0.1 pi.hole\n; (1 server found)\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22643\n;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 4096\n;; QUESTION SECTION:\n;pi.hole.\t\t\tIN\tA\n\n;; ANSWER SECTION:\npi.hole.\t\t2\tIN\tA\t127.0.0.1\n\n;; Query time: 0 msec\n;; SERVER: 127.0.0.1#53(127.0.0.1)\n;; WHEN: Mon Jan 14 14:59:08 Eastern 2019\n;; MSG SIZE rcvd: 52\n\n",
"Start": "2019-01-14T09:59:08.055375217-05:00"
},
{
"End": "2019-01-14T09:59:38.337787788-05:00",
"ExitCode": 0,
"Output": "\n; <<>> DiG 9.10.3-P4-Debian <<>> @127.0.0.1 pi.hole\n; (1 server found)\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17238\n;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 4096\n;; QUESTION SECTION:\n;pi.hole.\t\t\tIN\tA\n\n;; ANSWER SECTION:\npi.hole.\t\t2\tIN\tA\t127.0.0.1\n\n;; Query time: 0 msec\n;; SERVER: 127.0.0.1#53(127.0.0.1)\n;; WHEN: Mon Jan 14 14:59:38 Eastern 2019\n;; MSG SIZE rcvd: 52\n\n",
"Start": "2019-01-14T09:59:38.203771777-05:00"
}
],
"Status": "healthy"
},
"OOMKilled": false,
"Paused": false,
"Pid": 377,
"Restarting": false,
"Running": true,
"StartedAt": "2019-01-14T14:14:51.821691216Z",
"Status": "running"
}
}
}`