Kaniko builds fail on k8s GitLab CI runner when /etc/ssh is mounted #1353
I just tested using a FROM image without

@JacobHenner Can you please share your mount definition from the pod spec, along with the debug logs?

Looks like kaniko is missing the fact that
```yaml
volumeMounts:
  - mountPath: /builds
    name: repo
  - mountPath: /volume/obscuredA
    name: obscuredA
  - mountPath: /volume/obscuredB
    name: obscuredB
  - mountPath: /volume/obscuredC
    name: obscuredC
  - mountPath: /etc/ssh
    name: ssh
  - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
    name: default-token-vnb9h
    readOnly: true
volumes:
  - configMap:
      defaultMode: 420
      name: ssh
    name: ssh
```
This issue is still occurring in the latest image, and I am trying to track it down. Kaniko's logs indicate that

I am not sure what the expected behavior is here, since I'm having some difficulty following how kaniko manipulates these whiteout files. As far as I can tell, they are created by kaniko or hidden in some layer of the Dockerfile's parent image; they are not present in the running parent image or in the running GitLab executor that invokes kaniko. What is the expected behavior, at a technical level, beyond "it works"?
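For background on the whiteout files discussed above: in the OCI/Docker layer format, a zero-byte tar entry named `.wh.<name>` marks `<name>` from lower layers as deleted, and snapshotting tools like kaniko emit such entries when a file disappears between snapshots. A minimal sketch of the convention (the file names below are illustrative, not taken from this build):

```python
import io
import tarfile

# Build an in-memory layer tarball containing one whiteout entry.
# ".wh.ssh_host_rsa_key" under etc/ssh/ marks etc/ssh/ssh_host_rsa_key
# as deleted in lower layers. (Illustrative name, not from this issue.)
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(name="etc/ssh/.wh.ssh_host_rsa_key")
    info.size = 0  # whiteouts are zero-byte files
    tar.addfile(info)

buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    names = tar.getnames()

def deleted_paths(entries):
    """Return the paths that a layer's whiteout entries mark as deleted."""
    out = []
    for entry in entries:
        directory, _, base = entry.rpartition("/")
        # ".wh..wh..opq" is the special opaque-directory marker, not a
        # per-file whiteout, so skip it here.
        if base.startswith(".wh.") and base != ".wh..wh..opq":
            target = base[len(".wh."):]
            out.append(f"{directory}/{target}" if directory else target)
    return out

print(deleted_paths(names))  # ['etc/ssh/ssh_host_rsa_key']
```

This is only a sketch of the layer-format convention; it does not reproduce kaniko's actual snapshotting code.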
Actual behavior

When kaniko is used to build an image within a GitLab CI Kubernetes executor, the build fails with the following error if `/etc/ssh` is mounted within the executor (the Kubernetes pod created by GitLab for the build). However, if I change the mount path to something else (e.g. `/etc/ssh.bak/`), the error does not occur.

I wonder if this is due to the presence of `/etc/ssh` within the FROM image itself. Perhaps if `/etc/ssh.bak` existed in the image, it would fail too? I'm not sure why the mounts on the system where kaniko is executing would interact with the paths in the container image.

Expected behavior

I expected the kaniko build to succeed, unaffected by the mounts within the Kubernetes pod where kaniko is running.
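For context on why host-side mounts can interact with the build at all: kaniko snapshots the build pod's own root filesystem, so it detects mounted paths (I believe by parsing `/proc/self/mountinfo`) and adds them to an ignore list so they are not captured into layers. A minimal sketch of that detection, under the assumption that mountinfo is the mechanism; `mounted_paths` is a hypothetical helper, not kaniko's API:

```python
def mounted_paths(mountinfo_text: str) -> list[str]:
    """Extract mount points from /proc/self/mountinfo content.

    In the mountinfo format, the mount point is the 5th
    whitespace-separated field of each record.
    """
    paths = []
    for line in mountinfo_text.splitlines():
        fields = line.split()
        if len(fields) >= 5:
            paths.append(fields[4])
    return paths

# Two illustrative mountinfo records (IDs and devices are made up):
sample = """\
1633 1632 0:205 / /builds rw,relatime - ext4 /dev/sda1 rw
1634 1632 0:206 / /etc/ssh ro,relatime - tmpfs tmpfs ro
"""
print(mounted_paths(sample))  # ['/builds', '/etc/ssh']
```

If `/etc/ssh` lands on this ignore list while the FROM image also ships an `/etc/ssh` directory, that overlap is one plausible place for the whiteout handling to go wrong, which may be why renaming the mount to `/etc/ssh.bak` avoids the failure.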
To Reproduce
/etc/ssh
Additional Information
gcr.io/kaniko-project/executor@sha256:d60705cb55460f32cee586570d7b14a0e8a5f23030a0532230aaf707ad05cecd
Triage Notes for the Maintainers
`--cache` flag