When I try to back up the pod and its PV, the backup finishes very quickly and I see this error in the logs:
time="2024-12-04T06:25:38Z" level=info msg="Summary for skipped PVs: [{\"name\":\"pvc-df83e1cd-673c-454d-9943-c48c6b65fbe3\",\"reasons\":[{\"approach\":\"podvolume\",\"reason\":\"opted out due to annotation in pod csisnaps-pod\"},{\"approach\":\"volumeSnapshot\",\"reason\":\"no applicable volumesnapshotter found\"}]}]" backup=velero/rancher-test-default logSource="pkg/backup/backup.go:542"
I have really tried all sorts of things.
This is my VolumeSnapshotClass; I even added the label.
```
kubectl get volumesnapshotclass -o yaml
apiVersion: v1
items:
- apiVersion: snapshot.storage.k8s.io/v1
  deletionPolicy: Delete
  driver: csi.vsphere.vmware.com
  kind: VolumeSnapshotClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"snapshot.storage.k8s.io/v1","deletionPolicy":"Delete","driver":"csi.vsphere.vmware.com","kind":"VolumeSnapshotClass","metadata":{"annotations":{},"labels":{"velero.io/csi-volumesnapshot-class":"true"},"name":"block-snapshotclass"}}
    creationTimestamp: "2024-12-03T15:49:54Z"
    generation: 1
    labels:
      velero.io/csi-volumesnapshot-class: "true"
    name: block-snapshotclass
    resourceVersion: "20557"
    uid: 2ae336ab-3469-411d-ba05-fe1c81f9f718
kind: List
metadata:
  resourceVersion: ""
```
I also added the opt-in annotation to the PVC:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csisnaps-pvc-vsan-claim
  annotations:
    backup.velero.io/backup-volumes: "true"
spec:
  storageClassName: vsphere-csi-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
```
Either I'm missing something, or volume snapshots are not supported in my setup.
Please help.
CSI snapshots are replacing the native volume snapshotters, so for vSphere only CSI snapshots are supported.
Additionally, we recommend using CSI snapshots plus the data mover in a vSphere environment: there is a limit on the number of snapshots kept locally for each volume (3 for vSphere 7.x), and the VM's performance will suffer if you keep local snapshots for the long term.
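For illustration, a minimal sketch of a Backup that moves the CSI snapshot data off the cluster rather than keeping it as a local vSphere snapshot (the backup name and namespace below are placeholders, not taken from this issue):

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: rancher-test-csi-dm   # placeholder name
  namespace: velero
spec:
  includedNamespaces:
  - default                   # placeholder namespace
  # Upload the CSI snapshot data to the backup storage location via the data mover,
  # so no long-lived snapshot has to stay on the vSphere datastore.
  snapshotMoveData: true
```

The CLI equivalent is `velero backup create <name> --snapshot-move-data`.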
Thanks for providing the detailed information, and sorry for the confusion and inconvenience.
First, if you are a vSphere customer, it's better to ask for help from VCF customer support.
Second, regarding your issue, we need to know the content of the PV, i.e. the YAML of the PV, e.g. `kubectl get pv "pvc-name" -o yaml`.
It seems you only recently installed the CSI driver in this vSphere environment to provide the volume and snapshot functions.
It's possible some volumes were created before the CSI driver was enabled.
The CSI driver cannot handle such legacy volumes in the CSI way.
Velero therefore fell back to the default snapshotting method, the Velero-native snapshot. That relies on a plugin provided by the cloud provider, which is supposed to talk directly to the cloud provider's snapshot API; for example, Velero has AWS, Azure, and GCP plugins, but the vSphere plugin doesn't work that way.
As a result, the native snapshot failed too.
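To illustrate what to look for in the PV YAML, here is a hedged sketch with placeholder names and values; only the volume source stanza under `spec` matters:

```yaml
# CSI-provisioned PV: can be snapshotted through the vSphere CSI driver.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-xxxxxxxx                                # placeholder
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: vsphere-csi-sc
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: <volume-id>                       # placeholder
---
# Legacy in-tree PV: no CSI snapshot support; back it up with fs-backup instead.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: legacy-vsphere-pv                           # placeholder
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  vsphereVolume:
    volumePath: "[datastore1] kubevols/legacy.vmdk" # placeholder
```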
If that is exactly what happened in your environment, I suggest using Restic/Kopia to back up the legacy volumes. The new, CSI-compatible volumes can be backed up the CSI way.
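For reference, Velero's opt-in annotation for file-system backup lives on the pod, not the PVC, and lists volume names. A minimal sketch (the pod and PVC names come from this issue; the container and volume names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csisnaps-pod                       # pod name from the log above
  annotations:
    # Comma-separated list of volume names (as declared under spec.volumes)
    # that Velero should back up with Restic/Kopia.
    backup.velero.io/backup-volumes: data
spec:
  containers:
  - name: app                              # placeholder
    image: busybox                         # placeholder
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data                             # placeholder volume name
    persistentVolumeClaim:
      claimName: csisnaps-pvc-vsan-claim   # PVC name from the issue
```

Alternatively, setting `defaultVolumesToFsBackup: true` (or running `velero backup create <name> --default-volumes-to-fs-backup`) opts every pod volume into fs-backup without per-pod annotations.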
Discussed in #8481
Originally posted by r0k5t4r December 4, 2024
Hi all,
I have the following setup:
- vSphere 7
- VMware ESXi 7.0 Update 3q
- Rancher 2.9.2
- K8s v1.28.10+rke2r1
- Velero 1.15 (Helm)
Helm values:
```yaml
configuration:
  backupStorageLocation:
  - config:
      checksumAlgorithm: ""
      region: default
      s3ForcePathStyle: true
      s3Url: https://cephrgw.local
    name: default
    provider: aws
  defaultVolumesToFsBackup: false
  uploaderType: kopia
  volumeSnapshotLocation:
  - config:
      region: default
    name: default
    provider: aws
credentials:
  secretContents:
    cloud: |
      [default]
      aws_access_key_id = asdasdasdasdds
      aws_secret_access_key = asdasdasdasdasdsd
deployNodeAgent: true
features: EnableCSI
initContainers:
- imagePullPolicy: Always
  name: velero-plugin-for-aws
  volumeMounts:
  - name: plugins
snapshotsEnabled: true
```
Storage Class:
```
kubectl get sc
NAME                       PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
csisnaps-sc (default)      csi.vsphere.vmware.com   Delete          Immediate           true                   14h
vsphere-csi-sc (default)   csi.vsphere.vmware.com   Delete          Immediate           true                   15h
```
Restic/Kopia works fine. But shouldn't PV snapshots also work with this setup?
I followed the steps from this blog post and I can successfully create and restore volume snapshots:
https://cormachogan.com/2022/03/03/announcing-vsphere-csi-driver-v2-5-support-for-csi-snapshots/
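Roughly, the kind of VolumeSnapshot I tested with looks like this (a minimal sketch: the snapshot name is a placeholder, while the PVC and VolumeSnapshotClass names are from my setup above):

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: csisnaps-test-snapshot                        # placeholder name
spec:
  volumeSnapshotClassName: block-snapshotclass
  source:
    persistentVolumeClaimName: csisnaps-pvc-vsan-claim
```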