[WIP] Support for multi-arch #63
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Welcome @colek42!
Hi @colek42. Thanks for your PR. I'm waiting for a kubernetes-csi member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label.
[APPROVALNOTIFIER] This PR is NOT APPROVED This pull-request has been approved by: colek42 The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
I signed it.
It looks like we can use buildx with Travis: https://medium.com/@quentin.mcgaw/cross-architecture-docker-builds-with-travis-ci-arm-s390x-etc-8f754e20aaef
ref: ceph/ceph-csi#671
/ok-to-test
The proposal makes sense to me in principle. But there are some details to clarify:
```
#export DOCKER_CLI_EXPERIMENTAL=enabled
#docker run --rm --privileged docker/binfmt:66f9012c56a8316f9244ffd7622d7c21c1f6f28d
#docker buildx create --use --name mybuilder
#docker buildx build -t colek42/csi-node-driver-registrar --platform=linux/arm,linux/arm64,linux/amd64 . --push
```
Docker Buildx is included in Docker 19.03; see https://docs.docker.com/buildx/working-with-buildx/. We also need to make sure that we are able to push this image to quay.io, since CSI is using quay.io.
Looks like Quay supports the manifest API
ref: https://coreos.com/quay-enterprise/releases/#1.15.2
ref: https://docs.docker.com/registry/spec/manifest-v2-2/
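Assuming Quay's manifest-list support works as documented, per-architecture images can also be combined by hand with `docker manifest`. This is a command sketch only; the image names and tags are illustrative, not the project's actual ones:

```
# Illustrative only: combine per-arch images into one manifest list.
# Requires DOCKER_CLI_EXPERIMENTAL=enabled on older Docker releases.
docker manifest create quay.io/example/csi-node-driver-registrar:v1 \
    quay.io/example/csi-node-driver-registrar:v1-amd64 \
    quay.io/example/csi-node-driver-registrar:v1-arm64 \
    quay.io/example/csi-node-driver-registrar:v1-arm

docker manifest push quay.io/example/csi-node-driver-registrar:v1
```

`docker buildx build --push` does the equivalent automatically, so this would mainly matter as a fallback if buildx turns out not to work in the CI environment.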
- Does that also work in Prow, or only in TravisCI? We wanted to move away from TravisCI; it would be a step back to add more reasons why we have to use TravisCI.
I don't understand Prow; where can I find more info about it? If Prow has a current version of Docker and allows us to use buildx, it should work.
- Where and how do we choose the Go version? Right now it's specified in exactly one place (release-tools/travis.yml), and more precisely, with a major.minor.patch version, than the `FROM golang:1.13` in the Dockerfile.
If we are building in the container, I think it makes sense to specify the Go version in the Dockerfile.
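If the Go version moves into the Dockerfile, it could be pinned to a full major.minor.patch tag, which would address the precision concern. A hedged sketch of such a multi-stage Dockerfile; the exact version, package path, and base image are illustrative assumptions, not what the repo currently uses:

```dockerfile
# Sketch only: pin the exact Go version in the build stage.
FROM golang:1.13.3 AS builder
WORKDIR /app
ADD . /app
# Static binary so it runs in a minimal final image.
RUN CGO_ENABLED=0 go build -o /csi-node-driver-registrar ./cmd/csi-node-driver-registrar

# Minimal runtime stage; only the compiled binary is copied in.
FROM gcr.io/distroless/static
COPY --from=builder /csi-node-driver-registrar /csi-node-driver-registrar
ENTRYPOINT ["/csi-node-driver-registrar"]
```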
- Do we keep "make build" as-is (i.e. build with Go on the host) or do we replace it with building in Docker? This is mostly relevant for developers.
I always recommend building in a container for reproducibility. If developers need the same functionality, we can have the Makefile call `docker build` and mount a local folder into /bin.
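A minimal sketch of what such a Makefile hook could look like; the target name, image name, and in-container binary path are assumptions, not the repo's actual make targets:

```make
# Sketch: build inside a container, then copy the binary back to the host.
IMAGE ?= csi-node-driver-registrar:dev

container-build:
	docker build -t $(IMAGE) .
	# Mount the local bin/ directory so the container can drop the
	# compiled binary onto the host for developers.
	docker run --rm -v $(CURDIR)/bin:/out $(IMAGE) \
		cp /csi-node-driver-registrar /out/
```

This keeps `make build` available for host builds while offering a reproducible container path.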
- Do we need a `.dockerignore` file? Without it, `ADD . /app` may end up copying quite a bit of data from an unclear work directory when invoked by a developer. In PMEM-CSI, we blacklist everything and then whitelist just the parts that are needed for building (https://github.com/intel/pmem-csi/blob/devel/.dockerignore). This is sometimes a nuisance to maintain, though.
The pattern in PMEM-CSI looks good to me; the tradeoffs either way are pretty minimal. Because we are using a multi-stage build, the end result should be the same.
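The blacklist-then-whitelist `.dockerignore` pattern mentioned above could look roughly like this; the re-included entries are illustrative guesses at what a Go build needs, not the project's actual layout:

```
# Ignore everything by default...
*
# ...then re-include only what the build needs.
!cmd/
!pkg/
!vendor/
!go.mod
!go.sum
```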
I don't understand Prow; where can I find more info about it? If Prow has a current version of Docker and allows us to use buildx, it should work.
buildx depends on QEMU for simple multi-platform support, right? That may create additional dependencies for the host that the job runs on. Prow is the CI for Kubernetes, running on Kubernetes itself: https://github.com/kubernetes/test-infra/blob/master/prow/README.md
If we are building in the container, I think it makes sense to specify the Go version in the Dockerfile.
I don't mind that. I was just pointing out that if we move it, then further work is needed to adapt the other code that currently reads the Go version from travis.yml. My concern also was that 1.13 is less precise than what we are currently using. I don't know whether it matters in practice, though.
Currently, quay.io doesn't support multi-arch; see ceph/ceph-csi#707 (comment).
@pohly Since quay doesn't support multi-arch, could there be a possibility to move to gcr?
Yes, that has been the goal for a while now. We haven't done it yet because it wasn't clear how Prow jobs would get access to the necessary credentials for pushing to gcr. I believe that has been sorted out now.
Can you check whether `docker buildx` really works in a Prow job for multiple architectures? You can change the release-tools/build.make for that (for example, let `test` depend on a new `test-buildx` which runs the command); then we'll see during the next pull-kubernetes-csi-node-driver-registrar-unit run whether it succeeds.
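The suggested experiment could be sketched in release-tools/build.make roughly like this; the target names, builder name, and image tag are hypothetical:

```make
# Hypothetical probe: does buildx work inside the Prow job?
test: test-buildx

test-buildx:
	docker buildx create --use --name prow-probe || true
	docker buildx build \
		--platform=linux/amd64,linux/arm,linux/arm64 \
		-t csi-node-driver-registrar:buildx-probe .
```

If the job's Docker daemon lacks buildx or QEMU binfmt handlers, this target would fail and answer the question quickly.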
I'd like to get multi-arch builds off of Prow and onto the GCB-based image auto-push infra, so as to not need to register QEMU on the nodes (and to avoid different jobs stomping on it).
I'm personally too oversubscribed for this, but the CNCF infra is generally moving that way. @justaugustus and @Katharine have some work here.
@mkumatag: GitHub didn't allow me to request PR reviews from the following users: claudiubelu, BenTheElder, listx. Note that only kubernetes-csi members and repo collaborators can review this PR, and authors cannot review their own PRs.
You'll have to specify what kind of information you need from me. :) But I'll assume it's Windows-related. It seems that you're only taking into account the Linux container images. The Dockerfile is different due to a few reasons (see […]). And from what I understand, you want to move the build process from the host level to the container level (using […]). As far as binary building goes, it seems that […]. But there's a concern I have at the moment. Currently, from what I've seen in the […]. But even so, we'll still have to make an assumption: that the Windows build node is able to build container images for 1809, 1903, and 1909, which means that it has to have Hyper-V enabled, its docker to have […]. But to be fair, I'm not too familiar with […].
I don't think buildkit supports Windows, period. ref: moby/buildkit#616. Even if it does in the near future, it is likely to complicate the PR significantly. I have customers today looking for ARM CSI support. I think the initial hurdle is moving to Prow and GCR. Once that is taken care of, adding multi-arch support will be straightforward. Proposed way forward:
I also would like to see this for
Sorry, but I just do not have bandwidth for this right now. For image pushing, please sync up with k8s-infra-wg and look at the image pushing jobs https://github.com/kubernetes/test-infra/tree/master/images/builder; this tooling can be used with a wg-provided GCR / GCB / ... project, on which QEMU etc. should work fine. Other members of the kubernetes release group have experience with this too and use this pattern for other official images. /uncc
Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Instructions on configuring image building/pushing via Prow + GCB can be found here: https://github.com/kubernetes/test-infra/tree/master/config/jobs/image-pushing
The kube-cross image building is a decent example to crib from: https://github.com/kubernetes/release/tree/master/images/build/cross
Let me know if you need additional help, and feel free to assign me in PRs if you need reviews! :)
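Following the linked image-pushing setup, the GCB side is driven by a cloudbuild.yaml in the repo. A hedged sketch of what one for a multi-arch buildx push might contain; the step image, substitution names, and registry path are assumptions, not the project's actual configuration:

```yaml
# Sketch of a cloudbuild.yaml for a multi-arch build via buildx.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args:
      - -c
      - |
        # Register QEMU binfmt handlers so non-amd64 stages can run.
        docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
        docker buildx create --use
        docker buildx build \
          --platform=linux/amd64,linux/arm,linux/arm64 \
          -t gcr.io/$PROJECT_ID/csi-node-driver-registrar:$_GIT_TAG \
          --push .
substitutions:
  _GIT_TAG: 'dev'
```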
If the original author can't continue this, I could add the necessary bits, like I've done here: kubernetes-sigs/secrets-store-csi-driver#189
@claudiubelu -- I'd say give it a few days to see if there's a response before taking it over.
@claudiubelu I'd like to take a shot at this. Right now I am aiming for a Tuesday PR. Thank you for the ref to your work.
Great! For the most part, you can build the images in the same way I did in the PR. One key aspect is the […]. Just keep in mind to also submit the PR which creates the prow job, like this one: kubernetes/test-infra#17335
Hey everyone, thanks for all the attention on this. We have already been actively working on this, and have some experimental builds already going in k8s-staging-csi. Please reach out to @vitt-bagal @pohly @jingxu97 and @dims to coordinate efforts.
@msau42 could you provide a reference to the k8s-staging-csi repo, please?
We have a WIP cloudbuild and the prow job and prow status.
Can we move over tracking of multi-arch builds to this issue: kubernetes-csi/csi-release-tools#86? The problem is not specific to […].
On the other hand, this issue has more context. I'm undecided...
This PR... I was confused. We really shouldn't use a PR to discuss progress that is being made elsewhere now. I suggest we close this.
I am quite confused about who is handling what at this point. It seems like most of the work has been done.
There have been various people who experimented, most recently @vitt-bagal and @namrata-ibm. I'm now trying to get it from "POC" to "usable in practice". #84 hopefully will be the last PR before moving the common code and files into […]. One noteworthy difference compared to this PR: we can't do multi-stage builds with native compilation for Windows, which diminishes the value of doing it for Linux, because maintaining two different ways of producing images isn't nice. Therefore #84 keeps building binaries on the build host outside of Docker and then just puts those into the image with a simple Dockerfile + COPY.
@pohly That seems appropriate. Feel free to ping me if the team decides to move in another direction.
You don't need native building in the first place. You can do something like this: https://github.com/kubernetes-sigs/secrets-store-csi-driver/blob/master/windows.Dockerfile As you can see in the file, the RUN part is in the Linux stage, and then the binary is just copied over inside the Windows image.
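The pattern in the linked windows.Dockerfile is to cross-compile the Windows binary in a Linux build stage and only COPY it into the Windows base image, so no Windows build host is needed. A simplified sketch; the Go version, base image tag, and package path are illustrative assumptions:

```dockerfile
# Linux stage: cross-compile the Windows binary.
FROM golang:1.13 AS builder
WORKDIR /app
ADD . /app
RUN CGO_ENABLED=0 GOOS=windows GOARCH=amd64 \
    go build -o /csi-node-driver-registrar.exe ./cmd/csi-node-driver-registrar

# Windows stage: contains no RUN instructions, only COPY, so buildx
# can assemble it from a Linux builder.
FROM mcr.microsoft.com/windows/nanoserver:1809
COPY --from=builder /csi-node-driver-registrar.exe /csi-node-driver-registrar.exe
ENTRYPOINT ["/csi-node-driver-registrar.exe"]
```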
What type of PR is this?
/kind feature
What this PR does / why we need it:
Support CSI on multiple arch
Special notes for your reviewer:
I am proposing that we move the build logic out of the Makefile and into the multi-stage Dockerfile. This will help us support multiple architectures easily using `docker buildx` and ensure the build environment is reproducible. This PR is a POC of how that may work. Additional changes to the build process will be required. See https://hub.docker.com/r/colek42/csi-node-driver-registrar/tags for the built images.
ref: #55 #48
Does this PR introduce a user-facing change?: