
Sync doesn't work with Helm: Skipping deploy due to sync error: copying files: didn't sync any files #2478

Closed
kasvtv opened this issue Jul 16, 2019 · 19 comments


kasvtv commented Jul 16, 2019

Expected behavior

File sync works like it did before I configured Skaffold to use Helm

Actual behavior

File sync fails to copy files:

Syncing 1 files for kasvtv/server:10181686545ee20eab65f02c6d868538ac72855838b829575f33e514fea87044
time="2019-07-16T03:43:26+02:00" level=info msg="Copying files: map[server\\index.js:[/app/index.js]] to kasvtv/server:10181686545ee20eab65f02c6d868538ac72855838b829575f33e514fea87044"
time="2019-07-16T03:43:26+02:00" level=warning msg="Skipping deploy due to sync error: copying files: didn't sync any files"

Also peculiar are the following error messages, since all of these resources are actually available in the default namespace:

time="2019-07-16T03:43:16+02:00" level=warning msg="error adding label to runtime object: patching resource /postgres-pv: the server could not find the requested resource"
time="2019-07-16T03:43:17+02:00" level=warning msg="error adding label to runtime object: patching resource /redis-pv: the server could not find the requested resource"
time="2019-07-16T03:43:18+02:00" level=warning msg="error adding label to runtime object: patching resource /skaffold-nginx-ingress: the server could not find the requested resource"
time="2019-07-16T03:43:19+02:00" level=warning msg="error adding label to runtime object: patching resource /skaffold-nginx-ingress: the server could not find the requested resource"
Full log (after building the images):
time="2019-07-16T03:43:12+02:00" level=info msg="Building helm dependencies..."
time="2019-07-16T03:43:12+02:00" level=debug msg="Running command: [helm --kube-context docker-desktop dep build k8s]"
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
        Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: connectex: No connection could be made because the target machine actively refused it.
Update Complete.
Saving 1 charts
Downloading nginx-ingress from repo https://kubernetes-charts.storage.googleapis.com
Deleting outdated charts
time="2019-07-16T03:43:14+02:00" level=debug msg="Running command: [helm --kube-context docker-desktop upgrade skaffold --force k8s -f k8s/values.dev.yaml]"
Release "skaffold" has been upgraded.
LAST DEPLOYED: Tue Jul 16 03:43:14 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME             DATA  AGE
postgres-config  4     7m58s
redis-config     1     7m58s

==> v1/Deployment
NAME                 READY  UP-TO-DATE  AVAILABLE  AGE
client-deployment    3/3    3           3          7m58s
postgres-deployment  1/1    1           1          7m58s
redis-deployment     1/1    1           1          7m58s
server-deployment    3/3    3           3          7m58s
worker-deployment    1/1    1           1          7m58s

==> v1/PersistentVolume
NAME         CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                 STORAGECLASS  REASON  AGE
postgres-pv  2Gi       RWO           Retain          Bound   default/postgres-pvc  postgres      7m58s
redis-pv     2Gi       RWO           Retain          Bound   default/redis-pvc     redis         7m58s

==> v1/PersistentVolumeClaim
NAME          STATUS  VOLUME       CAPACITY  ACCESS MODES  STORAGECLASS  AGE
postgres-pvc  Bound   postgres-pv  2Gi       RWO           postgres      7m58s
redis-pvc     Bound   redis-pv     2Gi       RWO           redis         7m58s

==> v1/Pod(related)
NAME                                                     READY  STATUS   RESTARTS  AGE
client-deployment-669c7c7bc4-vlshf                       1/1    Running  0         7m58s
client-deployment-669c7c7bc4-vwgcp                       1/1    Running  0         7m58s
client-deployment-669c7c7bc4-xs78n                       1/1    Running  0         7m58s
postgres-deployment-5fd9c87d44-45qz8                     1/1    Running  0         7m58s
redis-deployment-6489948445-w28rb                        1/1    Running  0         7m58s
server-deployment-7bfb997cc4-4pmqs                       1/1    Running  0         7m58s
server-deployment-7bfb997cc4-5p7n9                       1/1    Running  0         7m58s
server-deployment-7bfb997cc4-cwps6                       1/1    Running  0         7m58s
skaffold-nginx-ingress-controller-7874fb4b7d-dvwnb       1/1    Running  0         7m58s
skaffold-nginx-ingress-default-backend-6c5b966875-w29vm  1/1    Running  0         7m58s
worker-deployment-85dccf5c4f-qx8kt                       1/1    Running  0         7m58s

==> v1/Service
NAME                                    TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)                     AGE
client-cluster-ip                       ClusterIP     10.108.20.164   <none>       3000/TCP                    7m58s
postgres-cluster-ip                     ClusterIP     10.98.53.214    <none>       5432/TCP                    7m58s
redis-cluster-ip                        ClusterIP     10.97.251.145   <none>       6379/TCP                    7m58s
server-cluster-ip                       ClusterIP     10.97.225.165   <none>       3000/TCP                    7m58s
skaffold-nginx-ingress-controller       LoadBalancer  10.104.139.203  localhost    80:31178/TCP,443:32005/TCP  7m58s
skaffold-nginx-ingress-default-backend  ClusterIP     10.103.125.246  <none>       80/TCP                      7m58s

==> v1/ServiceAccount
NAME                    SECRETS  AGE
skaffold-nginx-ingress  1        7m58s

==> v1beta1/ClusterRole
NAME                    AGE
skaffold-nginx-ingress  7m58s

==> v1beta1/ClusterRoleBinding
NAME                    AGE
skaffold-nginx-ingress  7m58s

==> v1beta1/Deployment
NAME                                    READY  UP-TO-DATE  AVAILABLE  AGE
skaffold-nginx-ingress-controller       1/1    1           1          7m58s
skaffold-nginx-ingress-default-backend  1/1    1           1          7m58s

==> v1beta1/Ingress
NAME     HOSTS  ADDRESS  PORTS  AGE
ingress  *      80       7m58s

==> v1beta1/Role
NAME                    AGE
skaffold-nginx-ingress  7m58s

==> v1beta1/RoleBinding
NAME                    AGE
skaffold-nginx-ingress  7m58s


time="2019-07-16T03:43:15+02:00" level=debug msg="Running command: [helm --kube-context docker-desktop get skaffold]"
time="2019-07-16T03:43:15+02:00" level=info msg="error decoding parsed yaml: Object 'Kind' is missing in 'REVISION: 3\nRELEASED: Tue Jul 16 03:43:14 2019\nCHART: k8s-0.1.0\nUSER-SUPPLIED VALUES:\nenv: dev\n\nCOMPUTED VALUES:\nenv: dev\nnginx-ingress:\n  controller:\n    affinity: {}\n    autoscaling:\n      enabled: false\n      maxReplicas: 11\n      minReplicas: 1\n      targetCPUUtilizationPercentage: 50\n      targetMemoryUtilizationPercentage: 50\n    config: {}\n    containerPort:\n      http: 80\n      https:
443\n    customTemplate:\n      configMapKey: \"\"\n      configMapName: \"\"\n    daemonset:\n      hostPorts:\n        http: 80\n        https: 443\n        stats: 18080\n      useHostPort: false\n    defaultBackendService: \"\"\n    dnsPolicy: ClusterFirst\n    electionID: ingress-controller-leader\n    extraArgs: {}\n    extraContainers: []\n    extraEnvs: []\n    extraInitContainers: []\n    extraVolumeMounts: []\n    extraVolumes: []\n    headers: {}\n    hostNetwork: false\n    image:\n      pullPolicy: IfNotPresent\n      repository: quay.io/kubernetes-ingress-controller/nginx-ingress-controller\n      runAsUser: 33\n      tag: 0.25.0\n    ingressClass: nginx\n    kind: Deployment\n    lifecycle: {}\n    livenessProbe:\n      failureThreshold: 3\n      initialDelaySeconds: 10\n      periodSeconds: 10\n      port: 10254\n      successThreshold: 1\n      timeoutSeconds: 1\n    metrics:\n      enabled: false\n      service:\n        annotations: {}\n        clusterIP: \"\"\n        externalIPs: []\n        loadBalancerIP: \"\"\n        loadBalancerSourceRanges: []\n        omitClusterIP: false\n        servicePort: 9913\n        type: ClusterIP\n      serviceMonitor:\n        additionalLabels: {}\n        enabled: false\n        namespace: \"\"\n    minAvailable: 1\n    minReadySeconds: 0\n    name: controller\n    nodeSelector: {}\n    podAnnotations: {}\n    podLabels: {}\n    podSecurityContext: {}\n    priorityClassName: \"\"\n    publishService:\n      enabled: false\n      pathOverride: \"\"\n    readinessProbe:\n      failureThreshold: 3\n      initialDelaySeconds: 10\n      periodSeconds: 10\n      port: 10254\n      successThreshold: 1\n      timeoutSeconds: 1\n    replicaCount: 1\n    reportNodeInternalIp: false\n    resources: {}\n    scope:\n      enabled: false\n      namespace: \"\"\n    service:\n      annotations: {}\n      clusterIP: \"\"\n      enableHttp: true\n      enableHttps: true\n      externalIPs: []\n      externalTrafficPolicy: \"\"\n      healthCheckNodePort: 0\n      labels: {}\n      loadBalancerIP: \"\"\n      loadBalancerSourceRanges: []\n      nodePorts:\n        http: \"\"\n        https: \"\"\n        tcp: {}\n        udp: {}\n      omitClusterIP: false\n      ports:\n        http: 80\n        https: 443\n      targetPorts:\n        http: http\n        https: https\n
     type: LoadBalancer\n    stats:\n      enabled: false\n      service:\n        annotations: {}\n        clusterIP: \"\"\n        externalIPs: []\n        loadBalancerIP: \"\"\n        loadBalancerSourceRanges: []\n        omitClusterIP: false\n        servicePort: 18080\n        type: ClusterIP\n    tolerations: []\n    updateStrategy: {}\n  defaultBackend:\n    affinity: {}\n    enabled: true\n    extraArgs: {}\n    image:\n      pullPolicy: IfNotPresent\n      repository: k8s.gcr.io/defaultbackend-amd64\n
runAsUser: 65534\n      tag: \"1.5\"\n    livenessProbe:\n      failureThreshold: 3\n      initialDelaySeconds: 30\n      periodSeconds: 10\n      successThreshold: 1\n      timeoutSeconds: 5\n    minAvailable: 1\n    name: default-backend\n    nodeSelector: {}\n    podAnnotations: {}\n    podLabels: {}\n    podSecurityContext: {}\n    port: 8080\n    priorityClassName: \"\"\n    readinessProbe:\n      failureThreshold: 6\n      initialDelaySeconds: 0\n      periodSeconds: 5\n      successThreshold: 1\n      timeoutSeconds: 5\n    replicaCount: 1\n    resources: {}\n    service:\n      annotations: {}\n      clusterIP: \"\"\n      externalIPs: []\n      loadBalancerIP: \"\"\n      loadBalancerSourceRanges: []\n      omitClusterIP: false\n      servicePort: 80\n      type:
ClusterIP\n    tolerations: []\n  global: {}\n  imagePullSecrets: []\n  podSecurityPolicy:\n    enabled: false\n  rbac:\n    create: true\n  revisionHistoryLimit: 10\n  serviceAccount:\n    create: true\n  tcp: {}\n  udp: {}\n\nHOOKS:\nMANIFEST:\n\n'"
time="2019-07-16T03:43:15+02:00" level=debug msg="Patching postgres-config in namespace default"
time="2019-07-16T03:43:15+02:00" level=debug msg="Patching redis-config in namespace default"
time="2019-07-16T03:43:15+02:00" level=debug msg="Patching postgres-pv in namespace default"
time="2019-07-16T03:43:15+02:00" level=debug msg="Patching postgres-pv in namespace default"
time="2019-07-16T03:43:16+02:00" level=debug msg="Patching postgres-pv in namespace default"
time="2019-07-16T03:43:16+02:00" level=warning msg="error adding label to runtime object: patching resource /postgres-pv: the server could not find the requested resource"
time="2019-07-16T03:43:16+02:00" level=debug msg="Patching redis-pv in namespace default"
time="2019-07-16T03:43:16+02:00" level=debug msg="Patching redis-pv in namespace default"
time="2019-07-16T03:43:17+02:00" level=debug msg="Patching redis-pv in namespace default"
time="2019-07-16T03:43:17+02:00" level=warning msg="error adding label to runtime object: patching resource /redis-pv: the server could not find the requested resource"
time="2019-07-16T03:43:17+02:00" level=debug msg="Patching postgres-pvc in namespace default"
time="2019-07-16T03:43:17+02:00" level=debug msg="Patching redis-pvc in namespace default"
time="2019-07-16T03:43:17+02:00" level=debug msg="Patching skaffold-nginx-ingress in namespace default"
time="2019-07-16T03:43:17+02:00" level=debug msg="Patching skaffold-nginx-ingress in namespace default"
time="2019-07-16T03:43:17+02:00" level=debug msg="Patching skaffold-nginx-ingress in namespace default"
time="2019-07-16T03:43:18+02:00" level=debug msg="Patching skaffold-nginx-ingress in namespace default"
time="2019-07-16T03:43:18+02:00" level=warning msg="error adding label to runtime object: patching resource /skaffold-nginx-ingress: the server could not find the requested resource"
time="2019-07-16T03:43:18+02:00" level=debug msg="Patching skaffold-nginx-ingress in namespace default"
time="2019-07-16T03:43:18+02:00" level=debug msg="Patching skaffold-nginx-ingress in namespace default"
time="2019-07-16T03:43:18+02:00" level=debug msg="Patching skaffold-nginx-ingress in namespace default"
time="2019-07-16T03:43:19+02:00" level=warning msg="error adding label to runtime object: patching resource /skaffold-nginx-ingress: the server could not find the requested resource"
time="2019-07-16T03:43:19+02:00" level=debug msg="Patching skaffold-nginx-ingress in namespace default"
time="2019-07-16T03:43:19+02:00" level=debug msg="Patching skaffold-nginx-ingress in namespace default"
time="2019-07-16T03:43:19+02:00" level=debug msg="Labels are not applied to service [skaffold-nginx-ingress-controller] because of issue: https://github.com/GoogleContainerTools/skaffold/issues/887"
time="2019-07-16T03:43:19+02:00" level=debug msg="Labels are not applied to service [skaffold-nginx-ingress-default-backend] because of issue: https://github.com/GoogleContainerTools/skaffold/issues/887"
time="2019-07-16T03:43:19+02:00" level=debug msg="Labels are not applied to service [client-cluster-ip] because of issue: https://github.com/GoogleContainerTools/skaffold/issues/887"
time="2019-07-16T03:43:19+02:00" level=debug msg="Labels are not applied to service [postgres-cluster-ip] because of issue: https://github.com/GoogleContainerTools/skaffold/issues/887"
time="2019-07-16T03:43:19+02:00" level=debug msg="Labels are not applied to service [redis-cluster-ip] because of issue: https://github.com/GoogleContainerTools/skaffold/issues/887"
time="2019-07-16T03:43:19+02:00" level=debug msg="Labels are not applied to service [server-cluster-ip] because of issue: https://github.com/GoogleContainerTools/skaffold/issues/887"
time="2019-07-16T03:43:19+02:00" level=debug msg="Patching skaffold-nginx-ingress-controller in namespace default"
time="2019-07-16T03:43:19+02:00" level=debug msg="Patching skaffold-nginx-ingress-default-backend in namespace default"
time="2019-07-16T03:43:19+02:00" level=debug msg="Patching client-deployment in namespace default"
time="2019-07-16T03:43:19+02:00" level=debug msg="Patching postgres-deployment in namespace default"
time="2019-07-16T03:43:19+02:00" level=debug msg="Patching redis-deployment in namespace default"
time="2019-07-16T03:43:19+02:00" level=debug msg="Patching server-deployment in namespace default"
time="2019-07-16T03:43:19+02:00" level=debug msg="Patching worker-deployment in namespace default"
time="2019-07-16T03:43:19+02:00" level=debug msg="Patching ingress in namespace default"
Deploy complete in 7.1830562s
Waiting for deployments to stabilize
Watching for changes every 1s...
time="2019-07-16T03:43:20+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:20+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:20+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:20+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:20+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:20+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:21+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:21+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:21+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:21+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:21+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:21+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:22+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:22+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:22+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:22+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:22+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:22+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:23+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:23+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:23+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:23+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:23+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:23+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:24+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:24+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:24+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:24+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:24+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:24+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:25+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:25+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:25+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:25+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:25+02:00" level=info msg="files modified: [server\\index.js]"
time="2019-07-16T03:43:25+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:25+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:26+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:26+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:26+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:26+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:26+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:26+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
Syncing 1 files for kasvtv/server:10181686545ee20eab65f02c6d868538ac72855838b829575f33e514fea87044
time="2019-07-16T03:43:26+02:00" level=info msg="Copying files: map[server\\index.js:[/app/index.js]] to kasvtv/server:10181686545ee20eab65f02c6d868538ac72855838b829575f33e514fea87044"
time="2019-07-16T03:43:26+02:00" level=warning msg="Skipping deploy due to sync error: copying files: didn't sync any files"
Watching for changes every 1s...
time="2019-07-16T03:43:27+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:27+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:27+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:27+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:27+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:27+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:28+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:28+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:28+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:28+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:28+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:28+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:29+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:29+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:29+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:29+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:29+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:29+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:30+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:30+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:30+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:30+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
time="2019-07-16T03:43:30+02:00" level=debug msg="Checking base image node:10.16.0-alpine for ONBUILD triggers."
time="2019-07-16T03:43:30+02:00" level=debug msg="Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]"
Pruning images...

Information

- Skaffold version: 0.33.0
- Docker version: Docker Desktop 2.0.5.0 (35318) running Kubernetes 1.14.3
- Operating system: W10 Pro
- Contents of skaffold.yaml:

```yaml
apiVersion: skaffold/v1beta2
kind: Config
build:
  local:
    push: false
  artifacts:
    - image: kasvtv/client
      context: client
      docker:
        dockerfile: dev.Dockerfile
      sync:
        '**/*.js': .
        '**/*.css': .
        '**/*.html': .
    - image: kasvtv/server
      context: server
      docker:
        dockerfile: dev.Dockerfile
      sync:
        '**/*.js': .
    - image: kasvtv/worker
      context: worker
      docker:
        dockerfile: dev.Dockerfile
      sync:
        '**/*.js': .
deploy:
  helm:
    releases:
      - name: k8s
        chartPath: k8s
        valuesFiles:
          - k8s/values.dev.yaml
```

Steps to reproduce the behavior

  1. Clone https://github.com/kasvtv/skaffold-helm-issue
  2. Run: skaffold dev
  3. Update a JS file in client, worker, or server (this fails on the master branch but works on the no-helm branch)
dgageot added the area/deploy, deploy/helm, and kind/bug labels on Jul 16, 2019
kasvtv (Author) commented Jul 16, 2019

I found the cause right here:
https://github.com/GoogleContainerTools/skaffold/blob/master/pkg/skaffold/sync/sync.go#L179

When using Helm, this expression is never true, because c.Image is the image name without the tag, i.e.:

image: kasvtv/server:de71e764ca967794947909b82a9b6d4b11122846f57c5cf30e4e2189fb8364d9
c.Image: kasvtv/server

When not using Helm, both values contain the tag and are thus equal when the correct pod is evaluated:

kasvtv/server:7bb32bff617a2ebfe5b171d3f47f0a47870b7369829e9c5cdb84248cfcd10efc

kasvtv (Author) commented Jul 16, 2019

#2111 did not fix this for me, by the way.
Furthermore, I believe it would be unwise to drop the tag, since that would cause issues when someone uses multiple images from the same repo with different tags.

@andreassiegel

I'm experiencing the same issue with the latest version of skaffold (v0.36.0) on Mac with Docker for Mac when I use helm for the deployment.

The file change is detected, but the sync fails with the same warning:

WARN[1393] Skipping deploy due to sync error: copying files: didn't sync any files

dgageot (Contributor) commented Aug 30, 2019

Hi @kasvtv and @andreassiegel, I wonder if there's any way you can test #2772 and see if it solves your issue.

dgageot self-assigned this on Aug 30, 2019
andreassiegel commented Aug 30, 2019

> Hi @kasvtv and @andreassiegel, I wonder if there's any way you can test #2772 and see if it solves your issue.

I checked out that version and tried to run my scenario with it. Unfortunately, the behavior is still the same:

INFO[0111] files modified: [...]
Syncing 39 files for my-service:bdf139597ca932529d0d191a80bf6e3a5fff064e98a23a640b5abe59cb72cd94
INFO[0111] Copying files: ...
WARN[0111] Skipping deploy due to sync error: copying files: didn't sync any files
Watching for changes...

I wonder if that's related to the behavior of the sha256 tagger, which always generates latest as the tag:

DEBU[0000] setting Docker user agent to skaffold-v0.1.0-4035-gef8cecfe
DEBU[0000] push value not present, defaulting to false because localCluster is true
Listing files to watch...
 - my-service
List generated in 251.436486ms
Generating tags...
 - my-service -> my-service:latest
Tags generated in 104.379µs
Checking cache...
 - my-service: Not found. Building
Cache check complete in 2.219984416s
Starting build...
Found [docker-desktop] context, using local docker daemon.
Building [my-service]...

Building my-service:latest...
...

The log output above clearly indicates that there is also another tag in play, but I don't know how to get Skaffold to use a tag like that for the build.

I remember that file sync worked when I used manifest files instead of Helm charts, but that was a while ago. Therefore I'm not sure which piece of my setup is the actual problem.
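
As an aside, the tag policy can be pinned explicitly in skaffold.yaml instead of relying on the default. A minimal sketch of Skaffold's tagPolicy stanza (gitCommit is the default tagger; the sha256 tagger always tags images as :latest, which matches the my-service:latest output above):

```yaml
build:
  # Explicitly select the gitCommit tagger so image tags track the
  # current commit instead of the sha256 tagger's fixed :latest tag.
  tagPolicy:
    gitCommit: {}
  artifacts:
    - image: my-service
```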

@andreassiegel

I also double-checked it with the example provided by @kasvtv.

This example doesn't set an explicit tag policy, but seems to be using gitCommit:

DEBU[0000] setting Docker user agent to skaffold-v0.1.0-4035-gef8cecfe
Listing files to watch...
 - kasvtv/client
DEBU[0002] Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]
 - kasvtv/server
DEBU[0002] Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]
 - kasvtv/worker
DEBU[0002] Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]
List generated in 2.74486392s
Generating tags...
 - kasvtv/client -> DEBU[0002] Running command: [git describe --tags --always]
DEBU[0002] Running command: [git describe --tags --always]
DEBU[0002] Running command: [git describe --tags --always]
DEBU[0002] Command output: [110200e
]
DEBU[0002] Command output: [110200e
]
DEBU[0002] Command output: [110200e
]
DEBU[0002] Running command: [git status . --porcelain]
DEBU[0002] Running command: [git status . --porcelain]
DEBU[0002] Running command: [git status . --porcelain]
DEBU[0002] Command output: []
DEBU[0002] Command output: []
kasvtv/client:110200e
 - kasvtv/server -> kasvtv/server:110200e
 - kasvtv/worker -> DEBU[0002] Command output: []
kasvtv/worker:110200e
Tags generated in 40.345608ms
Checking cache...
DEBU[0002] Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]
DEBU[0002] Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]
DEBU[0002] Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]
 - kasvtv/client: Not found. Building
 - kasvtv/server: Not found. Building
 - kasvtv/worker: Not found. Building
Cache check complete in 4.609235ms
Starting build...
Found [docker-desktop] context, using local docker daemon.
Building [kasvtv/client]...

After a file is updated, the change is detected, but the sync still fails:

INFO[0160] files modified: [server/index.js]
DEBU[0160] Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]
Syncing 1 files for kasvtv/server:3896e6bae9a4b214c4544b78801631cd4a61852e92303063199b5abeb964d9fe
INFO[0160] Copying files: map[server/index.js:[/app/index.js]] to kasvtv/server:3896e6bae9a4b214c4544b78801631cd4a61852e92303063199b5abeb964d9fe
INFO[0160] Skipping sync with pod client-deployment-669c7c7bc4-42qtq because it's not running
INFO[0160] Skipping sync with pod client-deployment-669c7c7bc4-gvjtg because it's not running
INFO[0160] Skipping sync with pod postgres-deployment-5fd9c87d44-6wjwg because it's not running
INFO[0160] Skipping sync with pod redis-deployment-6489948445-2zr6q because it's not running
INFO[0160] Skipping sync with pod server-deployment-7bfb997cc4-b5j6m because it's not running
INFO[0160] Skipping sync with pod server-deployment-7bfb997cc4-stkcd because it's not running
INFO[0160] Skipping sync with pod server-deployment-7bfb997cc4-tnksp because it's not running
INFO[0160] Skipping sync with pod worker-deployment-85dccf5c4f-zxzc6 because it's not running
INFO[0160] Skipping sync with pod client-deployment-669c7c7bc4-42qtq because it's not running
INFO[0160] Skipping sync with pod client-deployment-669c7c7bc4-gvjtg because it's not running
INFO[0160] Skipping sync with pod postgres-deployment-5fd9c87d44-6wjwg because it's not running
INFO[0160] Skipping sync with pod redis-deployment-6489948445-2zr6q because it's not running
INFO[0160] Skipping sync with pod server-deployment-7bfb997cc4-b5j6m because it's not running
INFO[0160] Skipping sync with pod server-deployment-7bfb997cc4-stkcd because it's not running
INFO[0160] Skipping sync with pod server-deployment-7bfb997cc4-tnksp because it's not running
INFO[0160] Skipping sync with pod worker-deployment-85dccf5c4f-zxzc6 because it's not running
WARN[0160] Skipping deploy due to sync error: copying files: didn't sync any files
Watching for changes...

Note that this time the sync fails because the pods are reportedly not running.

So I checked the pods and saw them running just fine:

$ kubectl get po
NAME                                                 READY   STATUS    RESTARTS   AGE
client-deployment-669c7c7bc4-42qtq                   1/1     Running   0          12m
client-deployment-669c7c7bc4-gvjtg                   1/1     Running   0          12m
client-deployment-669c7c7bc4-rxbg8                   1/1     Running   0          12m
k8s-nginx-ingress-controller-cb6bd85d8-wvwkh         1/1     Running   0          12m
k8s-nginx-ingress-default-backend-57bbbc5c89-r9c6z   1/1     Running   0          12m
postgres-deployment-5fd9c87d44-6wjwg                 1/1     Running   0          12m
redis-deployment-6489948445-2zr6q                    1/1     Running   0          12m
server-deployment-7bfb997cc4-b5j6m                   1/1     Running   0          12m
server-deployment-7bfb997cc4-stkcd                   1/1     Running   0          12m
server-deployment-7bfb997cc4-tnksp                   1/1     Running   0          12m
worker-deployment-85dccf5c4f-zxzc6                   1/1     Running   0          12m

Then I wanted to check whether the sync had failed because a pod was restarting, so I updated the file again:

INFO[0801] files modified: [server/index.js]
DEBU[0801] Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]
Syncing 1 files for kasvtv/server:3896e6bae9a4b214c4544b78801631cd4a61852e92303063199b5abeb964d9fe
INFO[0801] Copying files: map[server/index.js:[/app/index.js]] to kasvtv/server:3896e6bae9a4b214c4544b78801631cd4a61852e92303063199b5abeb964d9fe
WARN[0801] Skipping deploy due to sync error: copying files: didn't sync any files
Watching for changes...

This time there isn't any additional log output, just the warning about the error.

dgageot (Contributor) commented Aug 30, 2019

:-( Let me take some time to read your comments

dgageot (Contributor) commented Aug 30, 2019

OK, the first issue I see is that none of the Kubernetes manifests use templating to replace the image names. This means that the YAML files are applied as-is.
Those lines should use a variable for the image name:

Something like here

Without that, nothing will work.
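
For illustration, here is a minimal sketch of what such a templated image reference could look like in one of the chart's deployment manifests (the value name imageserver follows the naming suggested later in this thread, and the rest of the manifest is assumed):

```yaml
# templates/server-deployment.yaml (sketch): the image comes from a chart
# value, so Skaffold can substitute the freshly built, tagged image at
# deploy time instead of a hardcoded name.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: "{{ .Values.imageserver }}"
```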

@andreassiegel

This could indeed be related to my problem as well: even though my Helm chart uses a variable for the image, I don't override it in skaffold.yaml.

If it doesn't need to be overridden using some additional templating, that's at least not a problem in my case.
Let me update the other example to see if that changes anything...

andreassiegel commented Aug 30, 2019

Well, that probably can't be it, I guess...

At least there is now a set of variables in place. I updated the deployments using something like image: {{ .Values.images.worker }}, so that the values.yaml now includes these additional values:

```yaml
images:
  client: kasvtv/client
  server: kasvtv/server
  worker: kasvtv/worker
```

However, unless Skaffold does some analysis, it probably doesn't know which variables are actually used for the image name.

Apologies if that thought might be sort of stupid, I'm absolutely not into the details of skaffold internals. 🙃

The outcome regarding file sync is still the same:

INFO[0039] files modified: [server/index.js]
DEBU[0039] Found dependencies for dockerfile: [{package-lock.json /app true} {package.json /app true} {. /app true}]
Syncing 1 files for kasvtv/server:f8f0d4a078c33160b646c541c2eb7c60d57b5d3f8cc0549229b5a07989ee00ee
INFO[0039] Copying files: map[server/index.js:[/app/index.js]] to kasvtv/server:f8f0d4a078c33160b646c541c2eb7c60d57b5d3f8cc0549229b5a07989ee00ee
WARN[0040] Skipping deploy due to sync error: copying files: didn't sync any files
Watching for changes...

dgageot (Contributor) commented Aug 30, 2019

@andreassiegel no worries. I don't know anything about helm myself.

Instead of adding those lines to values.yaml, could you use this syntax: https://github.com/GoogleContainerTools/skaffold/blob/master/examples/helm-deployment/skaffold.yaml#L16

with something like:

```yaml
values:
  imageclient: kasvtv/client
  imageserver: kasvtv/server
  imageworker: kasvtv/worker
```
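
Applied to the skaffold.yaml from this issue, the release entry would look roughly like this (a sketch; the keys under values must match whatever names the chart templates actually reference):

```yaml
deploy:
  helm:
    releases:
      - name: k8s
        chartPath: k8s
        valuesFiles:
          - k8s/values.dev.yaml
        # Skaffold replaces each value below with the corresponding built
        # image reference (name:tag) when rendering the release.
        values:
          imageclient: kasvtv/client
          imageserver: kasvtv/server
          imageworker: kasvtv/worker
```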

dgageot (Contributor) commented Aug 30, 2019

I pushed more code to properly apply labels to all deployed resources.

andreassiegel commented Sep 2, 2019

Thanks a lot @dgageot! It is working with the latest version from master now.

However, in my setup some additional changes were required to make it work in the end. I'll list them here just in case they're helpful for someone else as well:

BTW: I ended up with a Docker build argument to set the user because in actual production deployments I certainly don't want to use a root user.
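
For reference, a minimal sketch of wiring such a build argument through skaffold.yaml; buildArgs is Skaffold's field for Docker build arguments, while the APP_USER name and the root-for-dev choice are assumptions about the setup described above:

```yaml
build:
  artifacts:
    - image: my-service
      docker:
        dockerfile: dev.Dockerfile
        buildArgs:
          # Assumed arg: dev builds run as root so file sync can write
          # into the container; production builds pass a non-root user.
          APP_USER: root
```

The Dockerfile would then declare ARG APP_USER and switch with USER $APP_USER.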

tejal29 (Member) commented Sep 24, 2019

@andreassiegel looks like you were able to get this to work.
I am going to close this now. Please re-open if you need to address anything more.

tejal29 closed this as completed on Sep 24, 2019
igoooor commented Oct 25, 2019

Is it released already? Because I'm still facing this issue, using Skaffold 0.40.0.

@andreassiegel

> Is it released already? Because I'm still facing this issue, using Skaffold 0.40.0.

Yes, it is released and working fine. If I remember correctly, the fix was included in v0.38.0.

igoooor commented Oct 25, 2019

@andreassiegel do you have recommendations to follow to make it work? I saw your previous message, but I was not able to reproduce that.

@andreassiegel

@igoooor Is there any chance you can share your project or provide a minimal sample to reproduce your issue?

igoooor commented Oct 28, 2019

I fixed my issue by reading all the messages again, sorry about that.
