
Add CLI support for running dockerd in a container on containerd #1260

Merged: 2 commits merged into docker:master on Aug 20, 2018

Conversation

@dhiltgen (Contributor) commented Aug 1, 2018

- What I did

This PR adds support to the Docker CLI for managing the lifecycle of a Docker Engine running as a privileged container on top of containerd. New CLI UX is introduced to initialize a local engine, check for available updates, and update the local engine. Additional UX allows users to transition from a community-based engine to an enterprise engine if they so desire, by downloading an existing enterprise license from the Docker Store or initiating a new free trial license.

Not included in this PR are packaging-level changes to provide a streamlined user experience based on OS-level packages, which would allow users to continue using OS-level packages and related automation for installation and update operations if they prefer.

- How I did it

The implementation leverages the containerd client API and interacts with a local containerd to download images from a registry (defaulting to Docker Hub), as well as a Docker licensing library to interact with the Docker Store for generating trial licenses and downloading existing licenses.

- How to verify it

Unit tests and a new e2e test are included in the PR. The e2e test runs a nested containerd in a container upon which a nested dockerd is launched.

Nightly builds for the engine will be published at https://hub.docker.com/r/docker/engine-community/tags/

Manual validation may be performed by installing containerd locally, then running this CLI, specifying a nightly version tag. For example:

root@daniel-testkit-CE988A-0:~# docker engine init --engine-version 0.0.0-20180730163646-c19b8d9
Pulling docker.io/docker/engine-community:0.0.0-20180730163646-c19b8d9... done.
docker.io/docker/engine-community:0.0.0-20180730163646-c19b8d9 already present in containerd.
Waiting for engine to start... waiting for engine to be responsive... engine is online.
Success!  The docker engine is now running.

- Description for the changelog

Add support for running dockerd on top of containerd

- A picture of a cute animal (not mandatory but encouraged)

[image: screen_2018-08-01-14-48-06]

@GordonTheTurtle:

Please sign your commits following these rules:
https://github.com/moby/moby/blob/master/CONTRIBUTING.md#sign-your-work
The easiest way to do this is to amend the last commit:

$ git clone -b "ce_q3" [email protected]:dhiltgen/cli-1.git somewhere
$ cd somewhere
$ git rebase -i HEAD~842354176960
editor opens
change each 'pick' to 'edit'
save the file and quit
$ git commit --amend -s --no-edit
$ git rebase --continue # and repeat the amend for each commit
$ git push -f

Amending updates the existing PR. You DO NOT need to open a new one.

@codecov-io commented Aug 2, 2018

Codecov Report

Merging #1260 into master will increase coverage by 0.71%.
The diff coverage is 66.23%.

@@            Coverage Diff             @@
##           master    #1260      +/-   ##
==========================================
+ Coverage   54.05%   54.77%   +0.71%     
==========================================
  Files         272      292      +20     
  Lines       18113    19257    +1144     
==========================================
+ Hits         9791    10548     +757     
- Misses       7706     8055     +349     
- Partials      616      654      +38

@dhiltgen (Contributor, Author) commented Aug 3, 2018

The recent lint failure looks like a CI hiccup, unrelated to the changes I just pushed:

...
Step 10/10 : COPY . .
 ---> 3b3e3fa522cc
Successfully built 3b3e3fa522cc
Successfully tagged cli-linter:15395
docker: Cannot connect to the Docker daemon at tcp://35.185.65.192:2376. Is the docker daemon running?.
ERRO[0030] error waiting for container: context canceled 
Exited with code 125

Could someone whack CI to re-run that?

@vdemeester (Collaborator) left a comment

On design:

  • shouldn't we re-use the same "output formatting" when pulling images, etc.? Right now it tells me it's pulling something, but I have no idea of the progress. We could use docker or containerd progress formatting.
  • should the /etc/docker path be configurable? For example, my system's /etc/ is read-only (managed by NixOS), and I want to test/run this PR; shouldn't I be able to specify where it's going to write the configuration?
    I thought it was --config-file, but:
λ sudo ./build/docker-linux-amd64 engine init --config-file=/tmp/etc/docker
Failed to create docker daemon: failed to verify host dir /tmp/etc: Failed to create path /tmp/etc
  • it seems there is a need to log in before having a docker daemon, and this is a problem as docker login needs a daemon — on this, we may want to ask for login(s) whenever a command requires them (listing licenses, …), and later issue a docker login against the newly bootstrapped daemon (programmatically). The client-side-only login seems hackish to me.
  • On update it's possible to change the version and the registry-prefix but not the engine-image; what is the rationale for that? 🤔
  • I wish (it might not be possible) that the license checking (mainly used in docker engine activate) were pluggable (as our credential helpers are) — this would remove the need to vendor the docker/licensing code, but it would make it harder to migrate from CE to EE…

I still need to bootstrap a sandbox machine to test it fully though.

On code:

  • It would be really good to have unit tests on the engine package 👼
  • I feel like the containerdutils package should be some sort of "wrapper" client that gets created with our "view" of the docker and containerd clients. That way we wouldn't need to pass them to each and every exported function.
  • In general, command.Cli should only be used in cmd/* packages; having a dependency on it in the containerdutils package seems like a smell…
  • This is a public-facing repository that is easily used as a library, and in this context any exported element (function, struct) should be considered stable… So the less we export at first, the better. This is not the case right now.
  • github.com/docker/licensing needs some tests, and it would be very handy if it were using either the vndr or golang/dep tool

"strconv"
"strings"

"golang.org/x/net/context"
Collaborator:

use context instead of this deprecated package.


"golang.org/x/net/context"

"github.com/containerd/containerd/namespaces"
Collaborator:

Also, we group all non-stdlib imports, so those should be grouped
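
For instance, combined with the context comment above, the grouped import block would read (a sketch; stdlib first, then external packages):

import (
    "context"
    "strconv"
    "strings"

    "github.com/containerd/containerd/namespaces"
)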

flags := cmd.Flags()

flags.StringVarP(&options.licenseFile, "license", "l", "", "License File")
flags.StringVarP(&options.engineVersion, "engine-version", "", "", "Specify engine version (default is to use existing version)")
Collaborator:

Those should be flags.StringVar when they don't have a shorthand counterpart
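
For example, the engine-version flag above, which has no shorthand, becomes:

flags.StringVar(&options.engineVersion, "engine-version", "", "Specify engine version (default is to use existing version)")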


flags := cmd.Flags()

flags.StringVarP(&options.licenseFile, "license", "l", "", "License File")
Collaborator:

I think we can keep these long-flag-only for now 👼

flags.StringVarP(&options.engineVersion, "engine-version", "", "", "Specify engine version (default is to use existing version)")
flags.StringVarP(&options.registryPrefix, "registry-prefix", "", "docker.io/docker", "Override the default location where engine images are pulled")
flags.StringVarP(&options.format, "format", "", "", "Pretty-print licenses using a Go template")
flags.BoolVarP(&options.quiet, "queit", "q", false, "Only display available licenses by ID")
Collaborator:

s/queit/quiet

}
_, err = task.Delete(ctx)
if err != nil {
logrus.Debugf("Failed to delete task: %s", err)
Collaborator:

almost silently ignoring these errors 🤔 shouldn't we at least print some warnings?

}
err = container.Delete(ctx, containerd.WithSnapshotCleanup)
if err != nil {
logrus.Debugf("Failed to delete container: %s", err)
Collaborator:

Same here

if err != nil && strings.Contains(err.Error(), "no such file or directory") {
err = container.Delete(ctx, containerd.WithSnapshotCleanup)
if err != nil {
return fmt.Errorf("Failed to delete container to evaluate host dir %s: %s", target, err)
Collaborator:

use errors.Wrapf here too
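
Applied to the snippet above, that would read (a sketch using github.com/pkg/errors, which this PR already uses elsewhere):

if err := container.Delete(ctx, containerd.WithSnapshotCleanup); err != nil {
    return errors.Wrapf(err, "failed to delete container to evaluate host dir %s", target)
}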

var config map[string]interface{}
err = json.Unmarshal(configFileData, &config)
if err != nil {
return "", fmt.Errorf("Malformed config file - %s: %s", configFile, err)
Collaborator:

use errors.Wrapf here too

flags.StringVarP(&options.engineVersion, "version", "", "", "Specify engine version (default is to use existing version)")
flags.StringVarP(&options.registryPrefix, "registry-prefix", "", "docker.io/docker", "Override the default location where engine images are pulled")
flags.StringVarP(&options.format, "format", "", "", "Pretty-print licenses using a Go template")
flags.BoolVarP(&options.quiet, "queit", "q", false, "Only display available licenses by ID")
Collaborator:

s/queit/quiet

@cpuguy83 (Collaborator) left a comment

This needs a deeper review, but so far my comments are generally:

  1. Functionality needs ownership. Right now all functionality is scoped to a package when it should be scoped to a type.
  2. Errors, especially in the new functionality added here, should be well-typed so they can be properly inspected (e.g. not by checking the message itself).
  3. Interfaces are quite broad, both in the new interfaces being used and the interfaces being passed in to the new functionality. This should be scoped down so that, for instance, a function that just needs an io.Writer doesn't need the whole docker CLI object (see the sketch after this list).
  4. The fakeClient approach, where a struct does nothing but store configured functions, is not very clean and largely redundant. Might as well just make new objects with the needed functionality. Note you can have a base struct that provides "not implemented" behavior and wrap it with specific functionality for what's being tested... but scoping interfaces down should also help a lot here.
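
A minimal sketch of point 3 (names are illustrative, not from this PR):

package engine

import (
    "fmt"
    "io"
)

// waitForEngine needs only somewhere to write status output, so it asks
// for an io.Writer rather than the whole docker CLI object.
func waitForEngine(out io.Writer) error {
    fmt.Fprint(out, "Waiting for engine to start... ")
    // ... poll the daemon here ...
    fmt.Fprintln(out, "engine is online.")
    return nil
}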

return fmt.Errorf("Unable to access local containerd: %s", err)
}
defer cclient.Close()
return containerdutils.InitEngine(ctx, dockerCli, dclient, cclient, options.image, options.version, options.registryPrefix, options.configFile)
Collaborator:

Also, why are we passing both client and dockerCli? There are a few places where this is done.

@@ -126,7 +130,13 @@ func runLogin(dockerCli command.Cli, opts loginOptions) error { //nolint: gocycl

response, err = clnt.RegistryLogin(ctx, *authConfig)
if err != nil {
return err
if strings.Contains(err.Error(), "annot connect to ") {
Collaborator:

Can we get a real error type?
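
For reference, a hedged sketch of what a typed error could look like; callers then check the type (or a predicate) instead of matching message text. A later revision of this diff does switch to client.IsErrConnectionFailed (quoted further down):

// errConnectionFailed is a hypothetical typed error for this failure mode.
type errConnectionFailed struct {
    host string
}

func (e errConnectionFailed) Error() string {
    return "Cannot connect to the Docker daemon at " + e.host
}

// IsErrConnectionFailed reports whether err represents a connection failure.
func IsErrConnectionFailed(err error) bool {
    _, ok := err.(errConnectionFailed)
    return ok
}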

)

type (
fakeDockerClient struct {
Collaborator:

🤕

Collaborator:

I'm not sure I see the purpose of these structs that do nothing except take functionality from other things.
Why not provide specific structs? If the interface you are trying to satisfy is too large, can't we narrow down the interface?

Contributor (Author):

It's purely a testing abstraction. I looked around in this repo to find a pattern to emulate with the hope that I'd be following the preferred style and saw https://github.com/docker/cli/blob/master/cli/command/manifest/client_test.go which uses this model.

@cpuguy83 can you point me to a different/better reference you would prefer?

If this model is explicitly not what we want people to follow, could a maintainer put a comment in those files noting that those tests should be refactored, so others don't fall into the same trap I did in thinking this pattern is OK?

}

// GetContainerdClient verifies the containerd socket is present and writable by the current user
func GetContainerdClient() (*containerd.Client, error) {
Collaborator:

Seems like we shouldn't bother with this.

}

// PullWithAuth will use the credential cache auth to pull via containerd
func PullWithAuth(cli command.Cli, client WrappedContainerdClient, imageName string) error {
Collaborator:

This is all too much.
Let's consider what type should own this behavior.... e.g. a wrapped client.

EnterpriseEngineImage = "engine-enterprise"

// EngineSpec is the default container spec for the docker engine
EngineSpec = specs.Spec{
Collaborator:

Seems like this should be a function to get the default spec (since that's the only way it could be immutable)...
This will probably work, though.
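
A sketch of the suggested accessor; returning a fresh value on each call keeps callers from mutating shared package state (field values here are illustrative):

// DefaultEngineSpec returns a fresh copy of the default engine container spec.
func DefaultEngineSpec() specs.Spec {
    return specs.Spec{
        Version: specs.Version,
        Root:    &specs.Root{Path: "rootfs"},
        // ... remaining defaults from the current EngineSpec literal ...
    }
}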


type (
// WrappedContainerdClient abstracts the containerd client to aid in testability
WrappedContainerdClient interface {
Collaborator:

Should also be scoped to the call site, not the package.

)

type (
fakeLicensingClient struct {
Collaborator:

Same comment as above for the other fake clients.

LicensingPublicKey = "LS0tLS1CRUdJTiBQVUJMSUMgS0VZLS0tLS0Ka2lkOiBKN0xEOjY3VlI6TDVIWjpVN0JBOjJPNEc6NEFMMzpPRjJOOkpIR0I6RUZUSDo1Q1ZROk1GRU86QUVJVAoKTUlJQ0lqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FnOEFNSUlDQ2dLQ0FnRUF5ZEl5K2xVN283UGNlWSs0K3MrQwpRNU9FZ0N5RjhDeEljUUlXdUs4NHBJaVpjaVk2NzMweUNZbndMU0tUbHcrVTZVQy9RUmVXUmlvTU5ORTVEczVUCllFWGJHRzZvbG0ycWRXYkJ3Y0NnKzJVVUgvT2NCOVd1UDZnUlBIcE1GTXN4RHpXd3ZheThKVXVIZ1lVTFVwbTEKSXYrbXE3bHA1blEvUnhyVDBLWlJBUVRZTEVNRWZHd20zaE1PL2dlTFBTK2hnS1B0SUhsa2c2L1djb3hUR29LUAo3OWQvd2FIWXhHTmw3V2hTbmVpQlN4YnBiUUFLazIxbGc3OThYYjd2WnlFQVRETXJSUjlNZUU2QWRqNUhKcFkzCkNveVJBUENtYUtHUkNLNHVvWlNvSXUwaEZWbEtVUHliYncwMDBHTyt3YTJLTjhVd2dJSW0waTVJMXVXOUdrcTQKempCeTV6aGdxdVVYYkc5YldQQU9ZcnE1UWE4MUR4R2NCbEp5SFlBcCtERFBFOVRHZzR6WW1YakpueFpxSEVkdQpHcWRldlo4WE1JMHVrZmtHSUkxNHdVT2lNSUlJclhsRWNCZi80Nkk4Z1FXRHp4eWNaZS9KR1grTEF1YXlYcnlyClVGZWhWTlVkWlVsOXdYTmFKQitrYUNxejVRd2FSOTNzR3crUVNmdEQwTnZMZTdDeU9IK0U2dmc2U3QvTmVUdmcKdjhZbmhDaVhJbFo4SE9mSXdOZTd0RUYvVWN6NU9iUHlrbTN0eWxyTlVqdDBWeUFtdHRhY1ZJMmlHaWhjVVBybQprNGxWSVo3VkQvTFNXK2k3eW9TdXJ0cHNQWGNlMnBLRElvMzBsSkdoTy8zS1VtbDJTVVpDcXpKMXlFbUtweXNICjVIRFc5Y3NJRkNBM2RlQWpmWlV2TjdVQ0F3RUFBUT09Ci0tLS0tRU5EIFBVQkxJQyBLRVktLS0tLQo="
)

type (
Collaborator:

I'm pretty sure I've never seen this done.... not that it shouldn't be done, just that it's not idiomatic in the sense that none of the other code does this.
Can we stick to type Foo T?

vendor.conf Outdated
@@ -85,4 +85,15 @@ k8s.io/client-go kubernetes-1.11.0
k8s.io/kube-openapi d8ea2fe547a448256204cfc68dfee7b26c720acb
k8s.io/kubernetes v1.11.0
vbom.ml/util 256737ac55c46798123f754ab7d2c784e2c71783

github.com/containerd/fifo 3d5202a
Collaborator:

Also, is this used? Don't see why we'd need fifo here.

@cpuguy83 (Collaborator) commented Aug 3, 2018

It would be nice if there was an issue, or I suppose a comment in this PR, describing this change (what the new commands are, what they are for, demo output, etc.).

import (
"fmt"

"golang.org/x/net/context"
Contributor:

it's just "context" now

Contributor (Author):

Thanks - I'll clean this up. I started this work quite a while back, and I think the CLI was still using that pattern when I started... I just overlooked switching these as I rebased.

}

imageName := options.registryPrefix + "/" + currentEngine
versions, err := containerdutils.GetEngineVersions(context.Background(), dockerCli, currentVersion, imageName, options.all)
Contributor:

pass ctx instead of context.Background() here
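
i.e. (a later revision of this call site does exactly this; see the updated snippet further down):

versions, err := containerdutils.GetEngineVersions(ctx, dockerCli, currentVersion, imageName, options.all)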

// GetContainerdClient verifies the containerd socket is present and writable by the current user
func GetContainerdClient() (*containerd.Client, error) {
// Check for presence first
_, err := os.Stat(ContainerdSockPath)
Contributor:

tsk tsk ;), connect and handle errors on connect (os.IsNotExist), don't check first. These types of checks are racy.
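
A hedged sketch of that shape: attempt the connection, then classify the failure. (Whether the dial error unwraps to an os.IsNotExist error depends on the dialer, so treat this as illustrative rather than exact.)

package engine

import (
    "os"

    "github.com/containerd/containerd"
    "github.com/pkg/errors"
)

func getContainerdClient(sockPath string) (*containerd.Client, error) {
    client, err := containerd.New(sockPath)
    if err != nil {
        // Classify the error after connecting rather than Stat-ing the socket first.
        if os.IsNotExist(errors.Cause(err)) {
            return nil, errors.Wrapf(err, "containerd socket not found at %s", sockPath)
        }
        return nil, errors.Wrap(err, "unable to connect to containerd")
    }
    return client, nil
}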

fmt.Fprintf(cli.Out(), "Pulling %s... ", imageName)
ctx := namespaces.WithNamespace(context.Background(), EngineNamespace)

_, err = client.Pull(ctx, imageName, containerd.WithResolver(resolver), containerd.WithPullUnpack)
Contributor:

If you want progress, you can use content.Fetch from containerd

@dhiltgen (Contributor, Author) commented Aug 3, 2018

shouldn't we re-use the same "output formatting" when pulling images, etc.? Right now it tells me it's pulling something, but I have no idea of the progress. We could use docker or containerd progress formatting.

Good suggestion. Let me take a look at what it would take to wire that up.

should the /etc/docker path be configurable? For example, my system's /etc/ is read-only (managed by NixOS), and I want to test/run this PR; shouldn't I be able to specify where it's going to write the configuration?

Yes, that was the intention, but it seems you found a bug. I'll investigate further, figure out what's going wrong, and add a test case for this.

it seems there is a need to log in before having a docker daemon, and this is a problem as docker login needs a daemon — on this, we may want to ask for login(s) whenever a command requires them (listing licenses, …), and later issue a docker login against the newly bootstrapped daemon (programmatically). The client-side-only login seems hackish to me.

If you're just deploying a community engine from Hub, it works without a login. If you're using an alternative registry (e.g., a DTR behind a firewall), then you might need to log in to pull images. If you're initializing an enterprise engine from Hub, you'll need to be logged in as well. I tried to wire this up so it's not mandatory, but so that we can support initialization from a private repo if necessary. Does that help address your concern, or are you still concerned with this model?

On update it's possible to change the version and the registry-prefix but not the engine-image; what is the rationale for that?

Good point. That wasn't intentional. I forgot to add the image flag to the update flows when I added it recently to the init flow.

It would be really good to have unit tests on the engine package

I'll see what I can do. :-)

I feel like the containerdutils package should be some sort of "wrapper" client that gets created with our "view" of the docker and containerd clients. That way we wouldn't need to pass them to each and every exported function.

Let me coordinate with @crosbymichael and @dmcgowan and see what we can come up with.

In general, command.Cli should only be used in cmd/* packages; having a dependency on it in the containerdutils package seems like a smell…

👍 I'll clean this up.

This is a public-facing repository that is easily used as a library, and in this context any exported element (function, struct) should be considered stable… So the less we export at first, the better. This is not the case right now.

Will do - I'll reduce the set of exported components in my next refactoring pass.

github.com/docker/licensing needs some tests, and it would be very handy if it were using either the vndr or golang/dep tool

\cc @mason-fish ^^

@dhiltgen (Contributor, Author) commented Aug 4, 2018

Functionality needs ownership. Right now all functionality is scoped to a package when it should be scoped to a type.

👍

Errors, especially in the new functionality added here, should be well-typed so they can be properly inspected (e.g. not by checking the message itself).

👍

Interfaces are quite broad, both in the new interfaces being used and the interfaces being passed in to the new functionality. This should be scoped down so, for instance, a function that just needs an io.Writer doesn't need the whole docker CLI object.

👍

The fakeClient approach, where a struct does nothing but store configured functions, is not very clean and largely redundant. Might as well just make new objects with the needed functionality. Note you can have a base struct that provides "not implemented" behavior and wrap it with specific functionality for what's being tested... but scoping interfaces down should also help a lot here.

I thought I was following existing patterns, but if there's a different preferred model I can adjust the implementation. @cpuguy83, can you point to examples in the code as a reference for how you'd like me to structure the test adapters?

@cpuguy83 (Collaborator) commented Aug 6, 2018

can you point to examples in the code as a reference for how you'd like me to structure the test adapters

If you need to implement many methods, then I'd use something like this:

type FooBar interface {
    Foo() error
    Bar() error
}

type baseFoo struct{}

func (baseFoo) Foo() error { return errors.New("not implemented") }

func (baseFoo) Bar() error { return errors.New("not implemented") }

type withFoo struct {
    baseFoo
}

func (withFoo) Foo() error {
    return nil
}

But I would also try to reduce the things I need to mock out as much as possible, which means defining interfaces locally for the methods you are actually calling, rather than using a concrete type or some (large) external interface.

@dhiltgen (Contributor, Author) commented Aug 8, 2018

It would be nice if there was an issue, or I suppose a comment in this PR, describing this change (what the new commands are, what they are for, demo output, etc.).

The following is a little dated and some changes have been made to the implementation based on UX feedback, but it shows the basic flow.

Problem Abstract

The Docker EE platform is distributed and the software lifecycle
managed using two different patterns today. This inconsistency leads
to customer confusion, operational inefficiencies, and in some cases
avoidable outages due to user mistakes.

This PR enables the docker engine to run as a privileged container on
top of containerd. This model also enables a smooth upgrade path from
CE to EE engines for users who choose to do so.

UX Example

The following examples outline how users would interact with the local
engine via the CLI.

Before Initialization

This new version of the docker CLI would be capable of initializing
a local engine if containerd was detected via the socket or
named pipe.

% docker info
Cannot connect to the Docker daemon at unix:///var/run/docker.sock however containerd is running locally
- Set DOCKER_HOST or use the `-H` flag to point to a remote docker engine or cluster
- To initialize a local node, run `sudo docker engine init`

Initializing a Local CE Engine

Provided containerd is present on the local system and the user has the
ability to write to the containerd socket (or named pipe), the CLI would
be capable of starting up the docker daemon.

% docker engine init --help

Usage:  docker engine init [OPTIONS]

Initialize a docker engine on the local containerd

Options:
      --config-file string       Specify the location of the daemon configuration file
                                 on the host (default "/etc/docker/daemon.json")
      --engine-version string    Specify engine version (default '18.09.0')
      --registry-prefix string   Override the default location where engine images are
                                 pulled (default "docker.io/docker")
      --root-dir string          Specify the location of the Docker Root Dir on the host
                                 (default "/var/lib/docker")
% sudo docker engine init
Pulling docker.io/dockereng/ce-engine:18.09.0... done.
Waiting for engine to start... waiting for engine to be responsive... engine 18.09.0 is online.
Success!  The docker engine is now running.

Notes:

  • Tag conventions
    • CLI builds default to install their corresponding engine version
    • YY.MM.P - Immutable version specific tag
    • nightly-YYMMDD - maps to a nightly edge build
  • helper packages would be generated to handle bootstrapping the
    engine via OS-level packages to mimic the current packaging behavior, but
    those would leverage these CLI commands under the covers in post-install
    packaging scripts.
  • If the image is already present, it will not be pulled, so offline use-cases
    can be supported by pre-loading the image via OS-level packaging.

Activating EE

If the user wants to start running an EE engine, they would use the
following command to activate an EE engine with the applicable license.
This can be performed after initializing a CE engine, or done directly
on top of containerd as the first step. If they already have CE running
with workloads on top, those would be preserved.

% docker engine activate --help

Usage:  docker engine activate [OPTIONS]

With this command you may apply an existing Docker enterprise license, or
interactively download one from Docker.  In the interactive exchange, you can
sign up for a new trial, or download an existing license.  If you are currently
running a Community Edition engine, the daemon will be updated to the
Enterprise Edition Docker engine with additional capabilities and long term
support.

For more information about different Docker Enterprise license types visit
http://www.docker.com/licenses

For non-interactive scriptable deployments, download your license from
https://hub.docker.com/ then specify the file with the '--license' flag.

Options:
      --engine-version string    Specify engine version (default is to use existing version)
  -l, --license string           License File
      --registry-prefix string   Override the default location where engine images are pulled (default
                                 "docker.io/dockereng")

The following example shows the UX for a brand-new customer who has never logged into Hub:

% docker engine activate
Please run 'docker login' with your existing account then re-run 'docker engine activate'
% docker engine activate
You don't have any existing Docker licenses
0) Generate a new trial license for EE Basic (individual engines)
1) Generate a new trial license for EE Advanced (cluster support)
Please pick the license above by number (0): 0

Pulling docker.io/dockereng/ee-engine:18.09.0... done.
Starting the docker engine... waiting for engine to be responsive... engine 18.09.0 is online.
Success!  Your engine has been activated.

Notes:

  • The user's Hub account would be granted access to the EE engines during their trial
  • The license would be stored locally and may be used to activate specific license enabled features
  • Tag conventions
    • Not visible unless you've paid, or have a trial license
    • $VER - Immutable version specific tag

The following variation shows the UX for an existing customer with some existing licenses.

% docker engine activate

Checking Docker Hub for licenses for dhiltgen... Licenses found.
0) Docker EE Basic- Sat Oct 14 2017
1) Docker EE Advanced, 10 nodes- Sat Oct 14 2017
Please pick the license above by number (0): 1

Pulling docker.io/dockereng/ee-engine:18.09.0... done.
Starting the docker engine... waiting for engine to be responsive... engine 18.09.0 is online.
Success!  Your engine has been activated.
See https://docs.docker.com/ee/ucp/admin/install/ for UCP installation instructions.

Seeing current engine status

The info and version commands should be updated to expose whether the engine
is a CE or EE engine, and the status of the license (expired, type of
license, etc.)

% docker info
Containers: 11
 Running: 0
 Paused: 0
 Stopped: 11
Images: 703
...
License: Community Edition Engine
License: Enterprise Standard trial - 20 days remaining.  To purchase go to http://hub.docker.com/renew
License: Enterprise Standard 10 nodes - expires 1/1/2019
License: EXPIRED!  You will no longer receive updates.  Please renew at http://hub.docker.com/renew

Notes:

  • Once expired, the user's access to EE patches would be disabled on the Hub side.

Detecting Available Updates

Checking for updates requires scanning Hub repos for new tags, and requires
Hub credentials, so this should be done using an explicit command, not included
in existing commands like info.

% docker engine check --help

Usage:  docker engine check [OPTIONS]

Check for available updates

Options:
      --all                      Report all versions (older and pre-release versions)
      --patch-only               Only report available patch versions
      --registry-prefix string   Override the default location where engine images are
                                 pulled (default "docker.io/dockereng")
% docker engine check
TYPE        VERSION         RELEASE NOTES
current     18.03.0
patch       18.03.1         http://docs.docker.com/releases/18.03.1-ee.releasenotes
patch       18.03.2         http://docs.docker.com/releases/18.03.2-ee.releasenotes
upgrade     18.09.0         http://docs.docker.com/releases/18.09.0-ee.releasenotes
% docker engine check
(WARNING: your version is no longer supported, consider upgrading)
TYPE        VERSION         RELEASE NOTES
current     18.02.0
upgrade     18.09.0         http://docs.docker.com/releases/18.09.0-ee.releasenotes

Notes:

  • If the EE license has expired, CE updates should be shown to allow "reverting" back to CE
    if the user decides to.

Updating a Local Engine

% docker engine update --help

Usage:  docker engine update [OPTIONS]

Update the local docker engine

Options:
      --engine-version string    Specify engine version - by default this command will update to the latest
                                 patch for the current engine version
      --registry-prefix string   Override the default location where engine images are
                                 pulled (default "docker.io/docker")
% docker engine update --engine-version 18.03.2
Using tag: 18.03.2
latest: Pulling from docker/ee-engine
57310166fe88: Pull complete
Digest: sha256:1669a6aa7350e1cdd28f972ddad5aceba2912f589f19a090ac75b7083da748db
Updating Local Docker Engine...
Docker Engine is ready.

@dhiltgen (Contributor, Author) commented Aug 8, 2018

If you need to implement many methods, then I'd use something like this:

Hmm... maybe I'm just not seeing it, but isn't this pattern going to result in a massive explosion in the amount of code necessary to mock out the various scenarios? Wouldn't I then have to create unique types for each scenario? That seems pretty heavyweight. The nice thing about the existing pattern I replicated is that it only gets involved in the test code, not the mainline production code, and each test case can easily alter the behavior of any routine.

Do you have a good example in the tree that's following your pattern for test cases I can look more closely at to see how it should map?

Since the comment is now hidden - some more explanation of why I used this pattern is in #1260 (comment)

@dhiltgen (Contributor, Author) commented Aug 8, 2018

@vdemeester @cpuguy83 I've pushed two new commits that should hopefully address almost all your comments. I still need to add some unit test coverage for the engine package, and I'm still not sure what the preferred mock model is in this tree.

Update: I forgot to mention I haven't made substantive changes to the licensing portion yet, as the upstream licensing lib work is still WIP - hoping we'll have a bump there in the next few days and then I can update the related glue code in this PR.


flags.StringVar(&options.licenseFile, "license", "", "License File")
flags.StringVar(&options.version, "version", "", "Specify engine version (default is to use existing version)")
flags.StringVar(&options.registryPrefix, "registry-prefix", "docker.io/docker", "Override the default location where engine images are pulled")
@seemethere (Contributor) commented Aug 9, 2018

Wouldn't it make more sense to have it as just docker.io/ and let the engine-image contain the prefix for the registry organization?

Contributor:

This applies to all other uses of the registry-prefix

Contributor (Author):

I think this boils down to a question of use-case and when these flags will be used. I was trying to optimize the UX around end-user use-cases to support local mirrors, while still enabling developer use-cases.

Specifically, if you're a typical user, I don't think you'll ever use --engine-image - the default behavior for the community engine and enterprise engine scenarios will do the right thing. This flag is really only meant for developers who want to deviate from the standard naming scheme.

Many customers run an internal registry in their firewalled environments, and the intention with the --registry-prefix flag was to give them a flag to establish the prefix where they store all their mirrored Docker product images, both community and enterprise engines, and in the future, additional products like UCP and DTR itself.

My concern with splitting it the way you suggest is that I think it makes it harder for end-users using local registry mirrors to benefit from the automatic community/enterprise logic. These mirroring customers would always have to pin the exact image name to use a mirror. In this version of the engine we're using a single image, so it seems ~simple, but it's entirely possible that as we work on the modularization in moby (buildkit, etc.) we will start to split the engine into multiple images. When/if we do so, we'd have to add new flags for developers to select the other component images, and one could see this getting out of control for a typical user. By using a common prefix and letting the CLI select the right image names by default, we hide the complexity of multiple component images from the typical end user. Splitting would also make it harder for us to support this mirroring pattern for other products down the road. For example, UCP today doesn't support installing from a mirror, and UCP itself is made up of a dozen-plus images already. I'd like to see us follow this pattern so that someday you can docker ucp install --registry-prefix mylocalregistry.acme.com/docker ... (or something along those lines), where the registry-prefix is common for all the mirrored content.

@vdemeester (Collaborator) left a comment

λ ./docker engine init --config-file=/tmp/etc/docker
docker engine init is only supported on a Docker cli with experimental cli features enabled
λ cat ~/.docker/config.json
{
        // […]
        "experimental": "enabled"
}

Getting a weird error on my experimental-enabled cli 😝

}

func getLicenses(ctx context.Context, cli command.Cli, options activateOptions) (*model.IssuedLicense, error) {
user, err := licenseutils.Login(ctx, cli, options.registryPrefix)
Collaborator:

Pretty sure Login doesn't need to get the whole command.Cli interface as an argument.

}

imageName := options.registryPrefix + "/" + currentEngine
versions, err := client.GetEngineVersions(ctx, dockerCli.RegistryClient(false), currentVersion, imageName, options.all)
Collaborator:

Not really a fan of boolean args, as they don't provide any insight into what we're reading (i.e. reading only that line, I have no idea what false means for this func).
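
One hedged way to address this at the call site (the constant name is illustrative, and the parameter is assumed to control whether insecure registries are allowed):

// Naming the argument makes the call site self-documenting.
const allowInsecureRegistries = false

versions, err := client.GetEngineVersions(ctx,
    dockerCli.RegistryClient(allowInsecureRegistries),
    currentVersion, imageName, options.all)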

// filter out our current version
for _, ver := range versions.Patches {
if ver.Tag != currentVersion {
availUpdates = append(availUpdates, containerizedengine.Update{
Collaborator:

these appends could be extracted into a function, I think 😉
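
For instance (a hypothetical helper; patchEntry stands in for whatever element type versions.Patches holds, assumed to expose a Tag field per the snippet above):

// patchEntry is a hypothetical stand-in for the element type of versions.Patches.
type patchEntry struct {
    Tag string
}

// appendUpdates appends every tag except the currently running version.
func appendUpdates(avail []containerizedengine.Update, patches []patchEntry, updateType, currentVersion string) []containerizedengine.Update {
    for _, ver := range patches {
        if ver.Tag != currentVersion {
            avail = append(avail, containerizedengine.Update{Type: updateType, Version: ver.Tag})
        }
    }
    return avail
}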

activateEngineFunc func(ctx context.Context,
opts containerizedengine.EngineInitOptions,
out containerizedengine.OutStream,
authResolver func(
Collaborator:

This could definitely be a local type (at least)

type resolver func(ctx context.Context, index *registrytypes.IndexInfo) types.AuthConfig
// …
activateEngineFunc func(ctx context.Context, opts EngineInitOptions, out OutStream, authResolver resolver) error

Collaborator:

Not that, really; we should just pass it the types.AuthConfig instead and not require the func/wrapper 😉
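
i.e. a sketch of the simplified signature:

activateEngineFunc func(ctx context.Context, opts containerizedengine.EngineInitOptions,
    out containerizedengine.OutStream, authConfig types.AuthConfig) error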

@@ -126,7 +126,13 @@ func runLogin(dockerCli command.Cli, opts loginOptions) error { //nolint: gocycl

response, err = clnt.RegistryLogin(ctx, *authConfig)
if err != nil {
return err
if client.IsErrConnectionFailed(err) {
Collaborator:

This flow is getting weirder and weirder to read (from command.GetDefaultAuthConfig to the last if err != nil at line 133). We may want to refactor that a bit 👼


// DoUpdate performs the underlying engine update
func (c baseClient) DoUpdate(ctx context.Context, opts EngineInitOptions, out OutStream,
authResolver func(ctx context.Context, index *registrytypes.IndexInfo) types.AuthConfig) error {
Collaborator:

Same as above: types.AuthConfig instead of the resolver func.

return fmt.Errorf("please pick the version you want to update to")
}

imageName := fmt.Sprintf("%s/%s:%s", opts.RegistryPrefix, opts.EngineImage, opts.EngineVersion)
Collaborator:

Same as above: if we use opts only for that, we're better off passing the image reference string directly

return &license, nil
}

func getRegistryAuth(cli command.Cli, registryPrefix string) (*types.AuthConfig, error) {
Collaborator:

We should pass a types.AuthConfig to this package instead of having to define this one. This would remove a lot of package deps (containerizedengine, trust, …)


var (
// SIGKILL maps to unix.SIGKILL
SIGKILL = unix.SIGKILL
Collaborator:

Those are already defined above (and exported too, it seems). We should define them only once and use that definition in both places (or… not define them as exported).

}
*/

// nolint: interfacer
Collaborator:

Why ignore the linter here? I guess it's about client *containerd.Client, which could just be client oci.Client (or even a local interface)

Contributor (Author):

Yeah, it wanted me to change the types to be incompatible with the upstream containerd pattern, which I think is wrong.

Collaborator:

How is it incompatible with the upstream containerd pattern? It just says there is a small interface you can "declare your variable with". The other way around would be to define a local interface that satisfies the contract you need 😝

@silvin-lubecki (Contributor) left a comment

I have only reviewed the commands so far; I still need to review the containerizedengine package.

quiet bool
}

// NewActivateCommand creates a new `docker engine activate` command
Contributor:

nit: it should be newActivateCommand

Long: `Activate Enterprise Edition.

With this command you may apply an existing Docker enterprise license, or
interactively download one from Docker. In the interactive exchange, you can
Contributor:

nit: . In <- Extra space


With this command you may apply an existing Docker enterprise license, or
interactively download one from Docker. In the interactive exchange, you can
sign up for a new trial, or download an existing license. If you are currently
Contributor:

Extra space . If

support.

For more information about different Docker Enterprise license types visit
http://www.docker.com/licenses
Contributor:

Use https instead?
By the way, the link seems to be broken...
Maybe move this part into another PR, ready to be merged as soon as the page is online?

flags := cmd.Flags()

flags.StringVar(&options.licenseFile, "license", "", "License File")
flags.StringVar(&options.version, "version", "", "Specify engine version (default is to use existing version)")
Contributor:

I think existing version can be misleading. Do you mean current version? Maybe installed version or running version instead?

}

// UpdatesWrite writes the context
func UpdatesWrite(ctx Context, availUpdates []containerizedengine.Update) error {
Contributor:

nit: availUpdates -> availableUpdates

context Context
expected string
}{

Contributor:

nit: empty line

{Type: "updateType1", Version: "version1", Notes: "description 1"},
{Type: "updateType2", Version: "version2", Notes: "description 2"},
}
out := bytes.NewBufferString("")
Contributor:

nit: out := &bytes.Buffer{}

{"Version": "version2", "Notes": "note2", "Type": "updateType2"},
}

out := bytes.NewBufferString("")
Contributor:

Same as above

Status: status,
IdentityToken: token,
}, err

Contributor:

nit: empty line


func (c baseClient) pullWithAuth(ctx context.Context, imageName string, out OutStream,
authResolver func(ctx context.Context, index *registrytypes.IndexInfo) types.AuthConfig) (containerd.Image, error) {

Contributor:

nit: empty line


distributionRef, err := reference.ParseNormalizedNamed(imageName)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse image name: %s: %s", imageName)
Contributor:

Why are there 2 '%s'? Is it missing an argument?

}
imgRefAndAuth, err := trust.GetImageReferencesAndAuth(ctx, nil, authResolver, distributionRef.String())
if err != nil {
return nil, errors.Wrap(err, "failed to get imgRefAndAuth")
Contributor:

I don't think imgRefAndAuth in the message will help a user...

continue
}
// update status of active entries!
for _, active := range active {
Contributor:

shadowing active with another active variable makes the code hard to review...

"golang.org/x/sys/unix"
)

var (
Contributor:

const?

if err != nil {
logrus.Debugf("Failed to kill task: %s", err)
}
_, err = task.Delete(ctx)
Contributor:

if _, err := ...; err != nil {

logrus.Debugf("Failed to delete task: %s", err)
}
err = container.Delete(ctx, containerd.WithSnapshotCleanup)
if err != nil {
Contributor:

if err := ...; err != nil {

configFileData := buf.Bytes()

var config map[string]interface{}
err = json.Unmarshal(configFileData, &config)
Contributor:

if err := ...; err != nil {

return "", errors.Wrapf(err, "malformed config file - %s", configFile)
}
rootDir := defaultDockerRootDir
tmp, defined := config["data-root"]
Contributor:

if data, defined := config["data-root"]; defined {
    rootDir = data.(string)
}

imageName := fmt.Sprintf("%s/%s:%s", opts.RegistryPrefix, opts.EngineImage, opts.EngineVersion)

// Look for desired image
image, err := c.cclient.GetImage(ctx, imageName)
Contributor:

This pattern seems to be the same as in baseClient.InitEngine; it could be extracted and factored out.

@dhiltgen (Contributor, Author):

Getting a weird error on my experimental-enabled cli

Hmm... I'm a bit confused by what's going on here. I tried switching the Annotations from "experimentalCLI" to "experimental" but that seems to break it worse. I couldn't find any docs covering this, so maybe I've got some other piece of wiring hooked up incorrectly. Any pointers?

@dhiltgen (Contributor, Author):

@vdemeester @cpuguy83 I've added one commit (9d73826) that refactors one of the test mocks using the model I think you are proposing. Please take a look at that and let me know what you think.

My preference is the original model, as I think it's easier to reason about, but if this is how you'd like the tests structured, I'll apply it to the other mocks in a follow-up commit.

@vdemeester (Collaborator) left a comment

  • pkg/containerized can be moved to internal too 👼
  • Any commented-out code should be removed (no need for it for now)

)

func getRegistryAuth(cli command.Cli, registryPrefix string) (*types.AuthConfig, error) {
fullName := registryPrefix + "/" + containerizedengine.EnterpriseEngineImage
Collaborator:

This seems wrong — the EnterpriseEngineImage there 🤔 It's going to get used in docker engine init too, which by default uses the Community image — we should pass the image too (and thus probably, at that point, just a reference string 👼)

Collaborator:

Also, if no registryPrefix is set, the reference will be /something (looking at the default value in options), and thus an invalid reference, isn't it?

// This client can be used to manage the lifecycle of
// dockerd running as a container on containerd.
func NewClient() (Client, error) {
cclient, err := containerd.New(containerdSockPath)
Collaborator:

One question on that: don't we want to allow customization of the containerdSockPath on engine init?


err := updateNonActive(ctx, ongoing, cs, statuses, keys, activeSeen, &done, start)
if err != nil {
continue outer
Collaborator:

Shouldn't this return an error instead? 🤔 (this would make outer: unnecessary)

Contributor (Author):

This algorithm has been structured this way for the past year (or more) in the containerd tree, so I'm a little hesitant to start changing it significantly without understanding all the partial failure modes.

package updates

// Update stores available updates for rendering in a table
type Update struct {

@vdemeester (Collaborator):

In addition, I'm not a huge fan of the proposed test refactoring, and as most of the code is under internal now, I feel we can easily do some refactoring (both on tests and parts of the code) in follow-up PRs 😉

@dhiltgen (Contributor, Author):

In addition, I'm not a huge fan of the proposed test refactoring, and as most of the code is under internal now, I feel we can easily do some refactoring (both on tests and parts of the code) in follow-up PRs

Sounds good. Thanks!

I'll drop the test refactoring commit, make the other requested changes, get a vendor bump of the licensing lib, add some unit test coverage on the licensing side, and get that pushed later today. Hopefully that will get us to a state where this is mergeable with only minor comments; then I'll submit a follow-up clean-up PR after FC to address those.

@dhiltgen force-pushed the ce_q3 branch 2 times, most recently from c00bfd9 to 5a95df7 on August 15, 2018
@vdemeester (Collaborator) left a comment

I'm getting some weird errors 🤔

λ sudo ./build/docker-linux-amd64 --config=/home/vincent/.docker engine rm
runtime for task io.containerd.runtime.process.v1: rpc error: code = NotFound desc = unknown runtime "io.containerd.runtime.process.v1": unknown
λ sudo ctr -n docker c ls
CONTAINER    IMAGE                                            RUNTIME
dockerd      docker.io/docker/engine-community:18.09.0-dev    io.containerd.runtime.process.v1

Also, on update, I was expecting a list to choose from 😝

λ sudo ./build/docker-linux-amd64 --config=/home/vincent/.docker engine update
please pick the version you want to update to
λ sudo ./build/docker-linux-amd64 --config=/home/vincent/.docker engine update --version 18.09.0-dev
failed to lookup task: runtime for task io.containerd.runtime.process.v1: rpc error: code = NotFound desc = unknown runtime "io.containerd.runtime.process.v1": unknown

Also, the first time I ran the init, it told me the daemon was responsive really quickly because I already had a daemon running… I didn't really check that the daemon answering was the same one it spawned (using version or something) — there is not much logging in debug mode, so it's a bit tricky to know what's going wrong 😓

}
return client.InitEngine(ctx, options.EngineInitOptions, dockerCli.Out(), authConfig,
func(ctx context.Context) error {
client := dockerCli.Client()
Collaborator:

This doesn't take the engineInitOptions into account at all — i.e. if the engine init configuration file changes the default host the daemon listens on, it will try to ping the default docker daemon. This kind of forces passing -H to the docker engine init command in those cases.

@dhiltgen (Contributor, Author) commented Aug 20, 2018

For the scenario where the daemon config is set up to listen on a non-standard sock, or only TCP, the user should also set their DOCKER_HOST to match; however, I agree we should attempt to detect a misconfiguration here so we can give a good error message if they're wrong.

@vdemeester (Collaborator) left a comment

Alright, managed to make it work 👼 🎉
So there are still a few comments to take into account (some errors.Wrap and some more complex ones), but as said before, since this is experimental and most of the code lives in an internal package, I'm fine doing those as follow-ups 😉
LGTM 🐯

Daniel Hiltgen added 2 commits August 20, 2018 09:42
This new collection of commands supports initializing a local
engine using containerd, updating that engine, and activating
the EE product

Signed-off-by: Daniel Hiltgen <[email protected]>
@dhiltgen (Contributor, Author):

Squashed to 2 commits (the vendor bump and the implementation) and rebased on master, so it should be ready to merge once CI goes green.

@crosbymichael (Contributor):

LGTM

@crosbymichael crosbymichael merged commit 0c444c5 into docker:master Aug 20, 2018
@GordonTheTurtle GordonTheTurtle added this to the 18.09.0 milestone Aug 20, 2018