
Helm chart is reporting a missing PodDisruptionBudget when replicas count is not defined #536

Open
alita1991 opened this issue May 19, 2023 · 2 comments

Comments


Which version of kube-score are you using?

kube-score version: 1.16.1

What did you do?

I ran kube-score against the bitnami/sealed-secrets Helm chart and got a CRITICAL result for a missing PodDisruptionBudget.
When I looked into the rendered Deployment manifest, I noticed that the replicas field is not set, and because of that the check fails. This is unexpected: Kubernetes defaults replicas to 1 when the field is omitted, so the check should be skipped just as it is for an explicit single replica.

// With the current check, a nil Replicas never matches this condition,
// so a Deployment that omits .spec.replicas is never skipped.
if deployment.Spec.Replicas != nil && *deployment.Spec.Replicas < 2 {
	score.Skipped = true
	score.AddComment("", "Skipped because the deployment has less than 2 replicas", "")
	return
}
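
For illustration, here is a minimal sketch of the behaviour I would expect, treating a nil Replicas as the Kubernetes default of 1. It reuses the names from the snippet above and is not the actual kube-score code:

// Sketch only: fall back to the Kubernetes default of 1 when .spec.replicas
// is omitted, so a Deployment without an explicit replica count is also skipped.
replicas := int32(1) // Kubernetes defaults .spec.replicas to 1 when unset
if deployment.Spec.Replicas != nil {
	replicas = *deployment.Spec.Replicas
}
if replicas < 2 {
	score.Skipped = true
	score.AddComment("", "Skipped because the deployment has less than 2 replicas", "")
	return
}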

What did you expect to see?

The PodDisruptionBudget check should be skipped, just as it is when replicas is explicitly set to 1.

What did you see instead?

[CRITICAL] Deployment has PodDisruptionBudget
        · No matching PodDisruptionBudget was found
            It's recommended to define a PodDisruptionBudget to avoid
            unexpected downtime during Kubernetes maintenance operations, such
            as when draining a node.

kmarteaux (Contributor) commented Jul 7, 2023

@alita1991 - I was not able to replicate the issue reported. In my development environment, I cloned the bitnami-labs/sealed-secrets repo, checked out the master branch, and, using the latest version of kube-score (v1.17.0), ran the following:

$ helm template sealed-secrets sealed-secrets -f sealed-secrets/values.yaml | kube-score score - -vv
2023/07/07 12:48:45 Unknown datatype: /v1, Kind=ServiceAccount
2023/07/07 12:48:45 Unknown datatype: rbac.authorization.k8s.io/v1, Kind=ClusterRole
2023/07/07 12:48:45 Unknown datatype: rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding
2023/07/07 12:48:45 Unknown datatype: rbac.authorization.k8s.io/v1, Kind=Role
2023/07/07 12:48:45 Unknown datatype: rbac.authorization.k8s.io/v1, Kind=Role
2023/07/07 12:48:45 Unknown datatype: rbac.authorization.k8s.io/v1, Kind=RoleBinding
2023/07/07 12:48:45 Unknown datatype: rbac.authorization.k8s.io/v1, Kind=RoleBinding
apps/v1/Deployment sealed-secrets in default                                  💥
    [OK] Stable version
    [OK] Label values
    [CRITICAL] Container Image Pull Policy
        · controller -> ImagePullPolicy is not set to Always
            It's recommended to always set the ImagePullPolicy to Always, to
            make sure that the imagePullSecrets are always correct, and to
            always get the image you want.
    [SKIPPED] Container Ports Check
        · Skipped because container-ports-check is ignored
    [CRITICAL] Container Security Context User Group ID
        · controller -> The container is running with a low user ID
            A userid above 10 000 is recommended to avoid conflicts with the
            host. Set securityContext.runAsUser to a value > 10000
        · controller -> The container running with a low group ID
            A groupid above 10 000 is recommended to avoid conflicts with the
            host. Set securityContext.runAsGroup to a value > 10000
    [OK] Container Security Context Privileged
    [SKIPPED] Container Seccomp Profile
        · Skipped because container-seccomp-profile is ignored
    [OK] Pod Topology Spread Constraints
        · Pod Topology Spread Constraints
            No Pod Topology Spread Constraints set, kube-scheduler defaults
            assumed
    [SKIPPED] Container CPU Requests Equal Limits
        · Skipped because container-cpu-requests-equal-limits is ignored
    [SKIPPED] Container Ephemeral Storage Request Equals Limit
        · Skipped because container-ephemeral-storage-request-equals-limit is ignored
    [SKIPPED] Container Memory Requests Equal Limits
        · Skipped because container-memory-requests-equal-limits is ignored
    [CRITICAL] Pod NetworkPolicy
        · The pod does not have a matching NetworkPolicy
            Create a NetworkPolicy that targets this pod to control who/what
            can communicate with this pod. Note, this feature needs to be
            supported by the CNI implementation used in the Kubernetes cluster
            to have an effect.
    [CRITICAL] Container Resources
        · controller -> CPU limit is not set
            Resource limits are recommended to avoid resource DDOS. Set
            resources.limits.cpu
        · controller -> Memory limit is not set
            Resource limits are recommended to avoid resource DDOS. Set
            resources.limits.memory
        · controller -> CPU request is not set
            Resource requests are recommended to make sure that the application
            can start and run without crashing. Set resources.requests.cpu
        · controller -> Memory request is not set
            Resource requests are recommended to make sure that the application
            can start and run without crashing. Set resources.requests.memory
    [OK] Container Image Tag
    [CRITICAL] Container Ephemeral Storage Request and Limit
        · controller -> Ephemeral Storage limit is not set
            Resource limits are recommended to avoid resource DDOS. Set
            resources.limits.ephemeral-storage
    [OK] Environment Variable Key Duplication
    [CRITICAL] Pod Probes
        · Container has the same readiness and liveness probe
            Using the same probe for liveness and readiness is very likely
            dangerous. Generally it's better to avoid the livenessProbe than
            re-using the readinessProbe.
            More information: https://github.com/zegl/kube-score/blob/master/README_PROBES.md
    [OK] Container Security Context ReadOnlyRootFilesystem
    [SKIPPED] Container Resource Requests Equal Limits
        · Skipped because container-resource-requests-equal-limits is ignored
    [OK] Deployment Pod Selector labels match template metadata labels
    [SKIPPED] Deployment has PodDisruptionBudget
        · Skipped because the deployment has less than 2 replicas
    [SKIPPED] Deployment has host PodAntiAffinity
        · Skipped because the deployment has less than 2 replicas
    [SKIPPED] Deployment targeted by HPA does not have replicas configured
        · Skipped because the deployment is not targeted by a HorizontalPodAutoscaler
v1/Service sealed-secrets in default                                          ✅
    [OK] Stable version
    [OK] Label values
    [OK] Service Targets Pod
    [OK] Service Type

As you can see, the PodDisruptionBudget test is skipped. Would you please retry with the latest kube-score? Additionally, I noted that when running helm template, the deployment.spec.replicas value was set (to 1). It was not undefined, as the issue report suggests.


cayla (Contributor) commented Aug 3, 2023

FWIW, I just had a similar issue where kube-score was complaining about missing PDBs even though they very much existed.

In my case, it was failing because I didn't uniformly specify the namespace on all the resources (using Helm, where -n [foo] was obscuring the omission).

I opened #549 in case it's helpful to other people in the future.
