
How to do dynamic PVC claim? #25

Closed
HackToHell opened this issue Mar 8, 2017 · 13 comments

@HackToHell

Added

  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
          volume.beta.kubernetes.io/storage-class: default
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 8Gi

to the 50kafka.yml file and commented out the 10pvc.yaml file, but I'm still getting an error saying

[SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "datadir-kafka-0", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "datadir-kafka-0", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "datadir-kafka-0", which is unexpected.]

I am unable to solve this particular issue. Do you have any idea what I am doing wrong?
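A quick way to see why a claim like this stays unbound is to describe it and look at the events; a minimal sketch, assuming the manifests go into a kafka namespace as in this repo:

# List claims in the namespace (assumed here to be "kafka") and inspect the one named in the error;
# the Events section at the end shows whether dynamic provisioning was attempted at all.
kubectl get pvc --namespace=kafka
kubectl describe pvc datadir-kafka-0 --namespace=kafka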

@solsson
Contributor

solsson commented Mar 8, 2017

What kind of Kubernetes setup are you using? Automatic volume provisioning depends on the type of Kubernetes hosting you're using. I've used it on GKE but decided to disable it for our production services because it results in quite obscure volume names.
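Whether dynamic provisioning kicks in depends on a storage class with a provisioner being available; a quick check, as a sketch (on GKE a default class backed by kubernetes.io/gce-pd is normally present):

# Show the storage classes the cluster offers and which one, if any, is marked as default.
kubectl get storageclass
kubectl describe storageclass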

@HackToHell
Author

HackToHell commented Mar 8, 2017

I'm using GKE right now. I also tried creating a storage class and setting that instead of default.

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
---

I tried to use that too and get the same error.
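For reference, pointing the claim template at that class would look roughly like this; a sketch based on the snippet from the first comment, using the beta annotation that was current at the time:

volumeClaimTemplates:
- metadata:
    name: datadir
    annotations:
      volume.beta.kubernetes.io/storage-class: slow
  spec:
    accessModes: [ "ReadWriteOnce" ]
    resources:
      requests:
        storage: 8Gi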

@solsson
Contributor

solsson commented Mar 8, 2017

I haven't tested this since back when the PetSet alpha was available in GKE, but it should work now. Could it be that some remnants of the manual provisioning are preventing the magic from happening? Can you create a PR where I can see the actual diff?
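In the meantime, a sketch of how such leftovers could be found and removed (the claim name and kafka namespace are taken from the error above; adjust as needed):

# PVs are cluster-scoped, PVCs are namespaced, so check both:
kubectl get pv
kubectl get pvc --all-namespaces
# Delete a stale claim so the StatefulSet's volumeClaimTemplates can create a fresh one:
kubectl delete pvc datadir-kafka-0 --namespace=kafka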

@HackToHell
Author

HackToHell commented Mar 8, 2017

Ok, let me do that. You can find the diff here: HackToHell@b2afaa5

@solsson
Contributor

solsson commented Mar 9, 2017

I don't think you should define a PersistentVolumeClaim at all. The one in your fork contains the matchLabels selector that is needed for manually created PVs, which makes no sense with automatic provisioning.

Please see for example: https://github.com/kubernetes/kubernetes/blob/v1.5.4/examples/cockroachdb/cockroachdb-statefulset.yaml#L162 and the lines below that. Other examples include https://github.com/kubernetes/kubernetes/blob/v1.5.4/examples/storage/cassandra/cassandra-statefulset.yaml and https://github.com/kubernetes/kubernetes/blob/v1.6.0-beta.2/test/e2e/testing-manifests/statefulset/mysql-galera/statefulset.yaml.

When it works I'd be happy to have a PR with this, as documentation to others who prefer automatic provisioning.

@HackToHell
Author

HackToHell commented Mar 9, 2017

I tried it on a new cluster without the PVC and it worked; it didn't work on the old one even after deleting the namespace.
Looks like it was some leftover resource with the same name from testing. Weird, kubectl get pv and kubectl get pvc didn't show anything though.

Let me add the details in a PR so that people can also use automatic provisioning.

@abishgj-rzt

@HackToHell Any update on this issue? I'm facing the same issue too.

@abishgj-rzt

abishgj-rzt commented May 10, 2017

@HackToHell @solsson I fixed the issue with a small change.

In the persistent volume:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-0
  labels:
    app: kafka
    podindex: "0"
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Mi
  hostPath:
    path: /tmp/k8s-data/datadir-kafka-0

The storage capacity is 100Mi, whereas in the persistent volume claim the requested storage is 200Gi:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: datadir-kafka-0
  namespace: kafka
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
  selector:
    matchLabels:
      app: kafka
      podindex: "0"

A persistent volume is the actual provisioned storage, whereas a persistent volume claim is a request from the pods for that storage. Since the claim requested more (200Gi) than the volume offered (100Mi), Kubernetes was not able to bind it.

Changing the capacity so that the persistent volume and the persistent volume claim match fixed this.
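A sketch of the corrected persistent volume, raising the capacity to match the 200Gi request (lowering the claim's request instead would work just as well):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-0
  labels:
    app: kafka
    podindex: "0"
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 200Gi   # was 100Mi; must be at least what the claim requests
  hostPath:
    path: /tmp/k8s-data/datadir-kafka-0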

Reference: https://blog.couchbase.com/stateful-containers-kubernetes-amazon-ebs/ for persistent volumes and persistent volume claims.

@abishgj-rzt

@HackToHell One reason why kubectl get pvc showed nothing could be the namespace.

By default it points to the default namespace; for a different namespace you have to pass the --namespace option.

Example: kubectl get pvc --namespace=kafka

@solsson
Contributor

solsson commented Aug 9, 2017

Dynamically provisioned volumes are the default in v2.0.0, released today. There's a gotcha documented in #57, but also a major advantage: it's easy to use with multi-zone clusters.

solsson closed this as completed Aug 9, 2017
@julienvincent

julienvincent commented Sep 4, 2017

I just ran into this same issue; not sure if I'm missing something? I'm using the latest version of Kubernetes (1.7.4) offered by GKE and the latest version of this repo.

@solsson
Contributor

solsson commented Sep 5, 2017

@julienvincent I'm quite surprised it doesn't work with GKE. Does it make a difference if you use/merge #50?

#67 is likely related.

@julienvincent

@solsson I realized it wasn't quite the same issue. I opened a new issue, #67, which I will update with some more info.
