
I want to be able to run and build Linux and Windows (LCOW) containers simultaneously. #79

Open
bplasmeijer opened this issue Apr 29, 2020 · 66 comments
Labels
community_new (New idea raised by a community contributor), docker_desktop (Improvements or additions to Docker Desktop)

Comments

@bplasmeijer

Tell us about your request
I want to be able to run and build Linux and Windows (LCOW) containers simultaneously.

Which service(s) is this request for?
docker build, and docker-compose

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?
At the moment, I am using the experimental LCOW feature to run Linux and Windows containers simultaneously. Working seamlessly with Linux and Windows containers is hard because of the need to switch container modes.

Are you currently working around the issue?

  • Switch context when building Linux images.
  • Switch the setting back to Windows containers, and pull Linux images in (LCOW) mode.

Additional context
The new 20H1 WSL Docker integration is a great feature, and I love it, but the integration requires Linux mode, so I lose my Windows context.
Issue #78 describes building both Linux and Windows images in parallel on Windows.

LCOW is great but is lingering in limbo. There are many solutions in which Windows and Linux containers work together.

@stephen-turner

See #78 for more details.

@0x53A

0x53A commented May 5, 2020

For me the important part is just building. Yeah, running both in parallel would be nice, but not strictly required.

Our Dockerfiles don't do anything fancy: they just copy a folder, set an environment variable, and set the entry point.

So I'd also be happy with some other tool. Maybe with WSL2 we can invoke BuildKit through WSL or something, I don't know.

@thomasoatman

Hi. I am working on moving our build and testing environment to Docker containers.
We ship Linux and Windows components and thus need support for both.
During testing, we require a Windows client container talking to a database in a Linux container.
So the lack of dual support is killing my efforts.
Thank you.

@erdtsieck

We're bound to Windows containers due to the use of Windows Authentication in some of our services and because of some SLAs. We do use some Linux containers, such as Consul and Traefik. Not being able to mix Windows and Linux containers prevents our team from starting to use WSL2.

@MathiasMagnus

I am working on upgrading our CI process to allow testing locally. Neither GitHub Actions nor GitLab CI allows running pipelines locally (of course, they weren't designed to do that). Generally I was advised to move away from the feature set of any CI runner and put as much logic into (shell) scripts as possible. Heeding that advice, I'm working on a CMake module (because all of our build automation is CMake anyway) which is able to run CI tasks locally.

The pipeline definitions for GitLab and GitHub remain YAML files which do little more than define the test matrix, distribute the work to runners, and invoke a single script at every pipeline stage. The CMake module is an auxiliary project committed into version control.

  • The CMake configuration step allows customizing the "local runner", selecting the subset of the test matrix one wishes to test.
  • The build step builds the images (from Dockerfiles within the repo), ideally in parallel.
  • CTest runs the same shell scripts which ordinary CI does (actually, they are CMake scripts too, invoked as cmake -P Build.cmake, for example) in the correct order, with dependency tracking, inside Docker containers. Tests again ought to run in parallel. (Tests would get labels based on which test matrix elements they inherit, so one could select a subset of the test matrix through ordinary CTest CLI args.)

This would give rapid-response CI runs without involving remote entities, possibly even running totally offline.

I may be able to orchestrate the entire thing by having CMake communicate with two daemons, one running locally inside Windows and one inside WSL2, but the setup becomes far more complicated, not to mention having to bind the lifetime of the Linux daemon to a long-running Windows process (a service?), instead of it being the same daemon that is already running on the Windows host.

@profblackjack

My org has .NET Framework applications I'm looking to containerize, and I would love to be able to mix Linux and Windows containers on dev machines for a local development environment more reflective of our deployed environments.

@MarkLFT

MarkLFT commented Dec 30, 2020

I also would find it extremely useful to be able to run both Windows and Linux containers on a single host. We have many scenarios that require running both types of containers in a single installation.

@derWallace

For my org, this feature would help quite a lot as well. We have some legacy components which require mixing Linux containers with Windows containers. Manually switching Docker Desktop between Linux containers and Windows containers all the time is quite cumbersome for our developers.

@rompic

rompic commented Jan 11, 2021

For my org, this feature would help quite a lot as well. We have some legacy components which require mixing Linux containers with Windows containers. Manually switching Docker Desktop between Linux containers and Windows containers all the time is quite cumbersome for our developers.

+1

AFAIK it is possible via the Docker CLI.

@bplasmeijer
Author

@nebuk89 any update?

@Seramis

Seramis commented Mar 31, 2021

Any updates?

@nebuk89
Contributor

nebuk89 commented Apr 22, 2021

Ah, sorry for being quiet, no updates yet I'm afraid 😞. Please keep following and providing 👍 so we can see the interest!

@silverl

silverl commented Apr 23, 2021

Another +1 here. We have a mix of Windows .NET Framework services and IIS web sites, and Linux services, and would LOVE to be able to run both on a single dev workstation.

@bplasmeijer
Author

Ah, sorry for being quiet, no updates yet I'm afraid 😞. Please keep following and providing 👍 so we can see the interest!

We need some movement on #79 here, see also microsoft/Windows-Containers#34

@slonopotamus

Am I missing something?

Can these containers talk to each other over the network?

@maikschulze

maikschulze commented Dec 30, 2021

Can these containers talk to each other over the network?

Yes, that seems to work. I have started Python web servers in the Linux and Windows containers and exposed their ports. Each container can connect to the other container's server, and both are reachable through localhost from the host. I cannot find any difference in behavior compared to using the actual "Switch to Linux/Windows containers" option in Docker Desktop. What also worked was elevating rights through --cap-add=NET_ADMIN for the Linux container.

I have also tested volumes and mounts successfully.

The behavior was identical in both Linux and Windows mode. I'm unsure what purpose the switch actually serves apart from setting the default Docker context (e.g. docker context use <context>) and some GUI settings in Docker Desktop.

For completeness, you can avoid explicitly creating the contexts and instead point the DOCKER_HOST environment variable at the named pipe to get the same result; see https://docs.docker.com/engine/reference/commandline/cli/#environment-variables
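
A minimal sketch of both approaches (the pipe names are the ones reported in this thread and may differ between Docker Desktop versions; the lin/win context names are just placeholders):

# Option A: create one context per engine and address them explicitly
docker context create lin --docker "host=npipe:////./pipe/docker_engine_linux"
docker context create win --docker "host=npipe:////./pipe/docker_engine_windows"
docker --context lin run --rm alpine uname -a
docker --context win run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver

# Option B: skip contexts and point DOCKER_HOST at the desired pipe
$env:DOCKER_HOST="npipe:////./pipe/docker_engine_linux"
docker run --rm alpine uname -a
$env:DOCKER_HOST="npipe:////./pipe/docker_engine_windows"
docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver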

@ohault

ohault commented Jun 10, 2022

Will the switching setting in Docker Desktop for Windows be removed soon in a future release (5.0)?

@tomatac

tomatac commented Jul 27, 2022

Is running a mix of Windows and Linux containers from a docker-compose file on a Windows host stable/supported?

@christophermclellan
Collaborator

christophermclellan commented Sep 16, 2022

Hi folks, just to reiterate @stephen-turner's comments from 20 August 2021 and 8 June 2021: this is not something that we are working on at the moment or currently have planned on our roadmap.

@abehartbg

For anyone who runs across this, @maikschulze's solution will probably work for most use cases. The only thing I've noticed is that both Linux and Windows need to be 'switched to' at least once to have the corresponding file handles available. I do this with this command: & $env:ProgramFiles\Docker\docker\DockerCli.exe -SwitchLinuxEngine

@maikschulze

Great to hear this is working for you, @abehartbg. I've made a small startup script sequence where I switch between the two engines and wait a few seconds in between.
Ever since, I've been using this approach successfully in a production CI environment. In GitLab you can configure this setup easily:

[[runners]]
  name = "<LINUX-NAME>"
  executor = "docker"
  ..
  [runners.docker]
    host = "npipe:////./pipe/docker_engine_linux"
    ..

[[runners]]
  name = "<WINDOWS-NAME>"
  executor = "docker-windows"
  ..
  [runners.docker]
    host = "npipe:////./pipe/docker_engine_windows"
    ..
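
For reference, a minimal sketch of the startup switch sequence described above, assuming the DockerCli.exe path and switch flags mentioned in this thread (the wait time is arbitrary):

# Switch to each engine once so both named pipes exist, waiting a few seconds in between
& "$env:ProgramFiles\Docker\docker\DockerCli.exe" -SwitchWindowsEngine
Start-Sleep -Seconds 10
& "$env:ProgramFiles\Docker\docker\DockerCli.exe" -SwitchLinuxEngine
Start-Sleep -Seconds 10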

@maikschulze

Hi,

I have one additional piece of advice if you use the simultaneous approach based on $env:DOCKER_HOST. Please make sure to operate in 'Linux containers' mode when using volumes. You cannot mount Windows paths into Linux containers if you are in Windows mode, regardless of the DOCKER_HOST value you set.

What works in Linux mode:

$env:DOCKER_HOST="npipe:////./pipe/docker_engine"
docker run --rm -it -v c/:/windows_mount alpine ls /windows_mount
$env:DOCKER_HOST="npipe:////./pipe/dockerDesktopLinuxEngine"
docker run --rm -it -v c/:/windows_mount alpine ls /windows_mount
$env:DOCKER_HOST="npipe:////./pipe/dockerDesktopWindowsEngine"
docker run --rm -it -v c:/:c:/windows_mount mcr.microsoft.com/windows/servercore:20H2-amd64 powershell -c ls c:/windows_mount
$env:DOCKER_HOST="npipe:////./pipe/docker_engine_windows"
docker run --rm -it -v c:/:c:/windows_mount mcr.microsoft.com/windows/servercore:20H2-amd64 powershell -c ls c:/windows_mount

What does not work in Linux mode:

$env:DOCKER_HOST="npipe:////./pipe/docker_engine_linux"
docker run --rm -it -v c/:/windows_mount alpine ls /windows_mount

I tested this with Docker Desktop 4.6.1 on Windows 10 20H2 and with Docker Desktop 4.6.1 on Windows 11 22H2 (with servercore:ltsc2022 images)

@TBBle

TBBle commented Dec 2, 2022

This is because it's Docker Desktop's intervening proxy (the thing at npipe:////./pipe/docker_engine) that rewrites Windows paths to work with the Linux engine before passing the request on to npipe:////./pipe/docker_engine_linux when in Linux Containers mode, or pretty much just passes it straight through to npipe:////./pipe/docker_engine_windows when in Windows Containers mode.

You probably could make

$env:DOCKER_HOST="npipe:////./pipe/docker_engine_linux"
docker run --rm -it -v c/:/windows_mount alpine ls /windows_mount

work, but using a WSL2-visible path instead, e.g., something like

$env:DOCKER_HOST="npipe:////./pipe/docker_engine_linux"
docker run --rm -it -v /mnt/c/:/windows_mount alpine ls /windows_mount

assuming the WSL2 backend. If that works, it'd also work if Docker Desktop was in Windows Containers mode, since using that DOCKER_HOST value, you're bypassing Docker Desktop entirely.

Note that I haven't checked this; it's possible that the WSL2 backend setup in Docker Desktop relies on Docker Desktop to do things other than rewriting the paths in order to mount volumes from Windows.

@maikschulze

Thank you very much for this information!

I just tried, in both Windows mode and Linux mode, to mount like this:

$env:DOCKER_HOST="npipe:////./pipe/docker_engine_linux"
docker run --rm -it -v /mnt/c/:/windows_mount alpine ls /windows_mount

The container starts but no files or directories are listed. I would imagine that in addition to the path rewrite, some kind of permission management is also done.

@TBBle

TBBle commented Dec 5, 2022

From memory (I poked deep into it using nsenter at one point), the dockerd process in WSL2 is running inside a container, so the relevant mount point where the Windows partitions are mounted might not be /mnt/c/, and it may be that, as you suppose, the Windows partitions are not mounted inside that container until Docker Desktop sees that a path is needed by a mount request it's rewriting.

That'd be a good security practice, in fact.

@andrewtek

I am supporting some legacy ASP.NET applications that use SOLR. I would like to create a single docker-compose file that uses Windows containers for the ASP.NET components and Linux containers for the SOLR components.

I have done some experiments running Linux containers from WSL2 while running Windows containers from Visual Studio, with Docker Desktop in Windows mode. I can access both containers (yeah), but I want a single docker-compose file that spins up my entire environment with a single button press.

Any chance this will happen?

@mloskot

mloskot commented May 2, 2023

@andrewtek

I want a single docker-compose that spins up my entire environment

Have you experimented with this by any chance?
https://devblogs.microsoft.com/premier-developer/mixing-windows-and-linux-containers-with-docker-compose/

@thaJeztah
Member

@andrewtek that article describes the experimental LCOW support, which was deprecated and removed from the runtime

@mloskot

mloskot commented May 2, 2023

@thaJeztah Thank you for the time-saving clarification. I was about to try it out myself; I should have clarified that I have not yet.
@andrewtek Forgive the confusion.

@thaJeztah
Member

Oh! Sorry I now see I @-mentioned the wrong person (sorry for that) 🙈

@andrewtek

Thanks for the update @mloskot and @thaJeztah. I did try out that article, but it did not work for me. As @thaJeztah said, the feature was deprecated.

For my situation, I found that I could not run SOLR using LCOW, but it runs great using WSL2. I love that SOLR has done so much to help the community with pre-made images and documentation to get up and running:
https://solr.apache.org/guide/solr/latest/deployment-guide/solr-in-docker.html

Unfortunately, if you need to work on an ASP.NET Framework application, you cannot benefit from those SOLR images. Is there a technical reason why the deprecated feature cannot be re-implemented with WSL2?

@thaJeztah
Member

The deprecated feature used a native Windows daemon to run Linux images. For that, it created a Hyper-V VM for each container it started (effectively Linux containers were implemented as Virtual Machines).
The code that implemented that was contributed by Microsoft and was intended to be a POC; the implementation was only partially completed, and the technical design had many issues (some of which were known when the implementation started), but it was meant to be a temporary implementation with the intent to move the abstraction to lower-level runtimes (containerd and the OCI runtime).

Some of that work has been done in containerd (see https://github.com/containerd/containerd/pulls?q=is%3Apr+lcow+ for pull requests), but may not yet be completed.

There was a lot of complication involved, because Windows-native code in the Docker Engine had to (conditionally, depending on whether the container's image was Linux or Windows) try to follow Linux semantics using Windows library code from the Go runtime (e.g. file paths following Linux semantics/validation instead of Windows).

Various parts of that complexity also affected other (non-experimental, non-Windows, non-LCOW) codepaths, and the underlying Windows APIs used for the initial PoC were deprecated (and unmaintained) by Microsoft.

The net result of all of that was that the LCOW code affected the maintainability and stability of the overall code base, which made us decide to remove it.

We don't have plans to (try to) implement this using WSL2 (this would be a significant amount of work, and would likely involve "implementing containers as WSL2 VMs"), but possibly this (or a comparable) feature could find its way back through the LCOW implementation in containerd once completed. That may still be a significant amount of work, and as of today it has not been budgeted, so I can't give any estimates. Work for this would largely need to be done in the upstream "Moby" repository (https://github.com/moby/moby), which is fully open source, so community participation / contributions are still an option.

@TBBle

TBBle commented May 3, 2023

I don't have the API specs in front of me, but if the Docker API had something that could identify the platform of a particular request, e.g., like the --platform flag for old LCOW, then a proxy (such as Docker Desktop, cough) could be used to split such requests between the native Windows dockerd and the WSL2-hosted dockerd. Right now Docker Desktop switches mode to support this same split one at a time, but both daemons are running.

I also confess I'm not really familiar with Docker Compose; can it be told that different parts of the system run on different dockerd instances, automating some of the ideas earlier in this thread? I'm not sure if it'd be feasible for, e.g., a backend private network shared between a Windows container and a Linux (WSL2-hosted) container, but I'm not sure it wouldn't be doable. (It also might be more complexity than just waiting for "new" LCOW to get into people's hands; my understanding is that it's workable on the containerd side now, but for Docker users it also depends on containerd-on-Windows and further front-end work to enable it; I also haven't tried it myself, so I might be overestimating its current state.)

@davhdavh

Why not just start both the Windows container mode and WSL2 container mode at the same time?
Then automatically configure the two in a swarm with some appropriate labels, and 99% of the problems go away.
Then if I docker run from WSL, it will talk to the Linux engine, and if I docker run from PowerShell it will talk to the Windows engine, and docker-compose can use the labels (or platform:, or just the image) to start on the proper engine.
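
Purely to illustrate that idea (a single Compose file cannot actually target two engines today, and the service names and images below are just placeholders), a hypothetical compose file might tag each service with its platform:

services:
  api:
    image: mcr.microsoft.com/dotnet/framework/aspnet:4.8
    platform: windows/amd64
  db:
    image: postgres:15
    platform: linux/amd64

The platform: key already exists in the Compose spec, but today the whole project still runs against whichever single engine the current context points at.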

@TBBle

TBBle commented May 10, 2023

You can do roughly that now, using Docker contexts per #79 (comment) (and see #79 (comment) and the following few comments for caveats and notes). I'm not sure if Docker Compose can work across contexts, though, or whether it just talks to the current context selected with docker context use.
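
As far as I know, each Compose project still targets a single engine per invocation, so a rough sketch of the split-by-context approach looks like this (the lin/win context names and compose file names are just placeholders):

# Switch context per project; each Compose project talks to one engine
docker context use lin
docker compose -f docker-compose.linux.yml up -d
docker context use win
docker compose -f docker-compose.windows.yml up -d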

@davhdavh

Yes, that was kinda my point. It just needs to be the default, or a checkbox to enable. That would send a clear signal to other tools to properly support it as well.

@wizpresso-steve-cy-fan

wizpresso-steve-cy-fan commented Sep 7, 2023

We are also interested to see if LCOW can be upgraded to support WSL2 as a backend.

The majority of our container applications right now are Windows-based, but some of our GPU workloads are based on Linux containers.

We can also confirm that #79 worked to some extent, but we don't know how to keep both the Windows and Linux Docker daemons running simultaneously without using & $env:ProgramFiles\Docker\docker\DockerCli.exe -SwitchWindowsEngine and & $env:ProgramFiles\Docker\docker\DockerCli.exe -SwitchLinuxEngine

PS: LCOW support is deprecated but not entirely removed AFAIK, and you can still turn it on with "experimental": true. Obviously, however, it doesn't seem to have GPU support, unlike WSL2 Docker. It seems it was removed after 23.0.0, which was half a year ago.

@TBBle

TBBle commented Sep 7, 2023

The old LCOW setup was removed from Docker in 23.0, and was already broken in various ways for years before that.

The new LCOW approach (Sometimes "LCOW v2", but I'm not sure if that's ever been used in a formal sense) is actually the same underlying tech as WSL2 (micro-VMs), but getting it into Docker depends on getting Windows Docker to be based on containerd, which has been a multi-year project already: moby/moby#41455 which I've apparently already referenced here in 2021, and others linked from there. (Also, I've lost track of what outstanding LCOW-specific work is still TBD in containerd itself. I believe it's been working in branches, but I'm not sure if they have all been merged into containerd yet.)

Once that's done, we will be pretty close (in a tech-stack sense, not in the "rainy afternoon's hacking" sense) to getting what this ticket wants: because containerd doesn't have "Windows" and "LCOW" modes, each container (or sandbox, really) can use a different runtime. Inter-container behaviour between LCOW and Windows containers will probably still have limitations, but being able to build Linux and Windows images in a single Docker daemon without changing modes would only require work in Docker itself; e.g. the UX could reinstate the old (LCOW) --platform parameter for build and execution commands, but the implementation would be quite different underneath.
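
As a sketch of what that UX might look like (hypothetical for a single mixed-mode daemon; the image tags are placeholders, and today these commands only build for whichever engine you're currently pointed at):

docker build --platform windows/amd64 -t myapp:win .
docker build --platform linux/amd64 -t myapp:lin .
docker run --rm --platform linux/amd64 alpine uname -a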

This might also open the door for Compose using a mix of platforms (which I don't think worked with LCOW v1), but I don't know enough about Compose to say anything further on that topic.
