
Relation between VolumeSnapshotContent and VolumeSnapshot should be 1:many #273

Closed
ashish-amarnath opened this issue Mar 9, 2020 · 12 comments

@ashish-amarnath

ashish-amarnath commented Mar 9, 2020

User Story

As a user of the CSI VolumeSnapshot API, I would like to be able to use VolumeSnapshotContents with a non-nil Source.snapshotHandle to provision new volumes, pre-populated with data from the same snapshot, in different namespaces in a Kubernetes cluster.

Detailed Description

Currently, for VolumeSnapshotContents with a non-nil .Spec.Source.SnapshotHandle, it is only possible to create multiple PVCs using the bound VolumeSnapshot as a DataSource when they are all in the same namespace. This is due to the 1:1 relationship constraint between VolumeSnapshots and VolumeSnapshotContents, and it prevents users from replicating their workloads across different namespaces in the same cluster.

Proposal

The relationship between VolumeSnapshotContents and VolumeSnapshots should be 1:many, and a VolumeSnapshotContent should be allowed to be referenced by multiple VolumeSnapshots across multiple namespaces.

cc @xing-yang

@nrb
Contributor

nrb commented Mar 10, 2020

Would kubernetes/enhancements#1112 / kubernetes/enhancements#1555 be relevant here, as an alternative to having a 1:N relationship between VSContents and VolumeSnapshots?

I ask because a 1:N relationship would break the analogy between VSContents/VS and PV/PVC, which would make the snapshot objects a little harder to reason about.

Another suggestion may be to copy the snapshotHandle from one VSContents to another via some tool, so that multiple 1:1 relationships can exist. The problem there, though, is that then you have multiple k8s objects pointing to 1 snapshotHandle, and if you have a deletionPolicy of Delete, the first VSC to be deleted will take the physical snapshot with it.
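
A minimal sketch of that hazard (hypothetical names, driver, and snapshot handle; the API group is snapshot.storage.k8s.io, v1beta1 at the time of this thread): a second pre-provisioned VolumeSnapshotContent carrying the same snapshotHandle as an existing one, so whichever object is deleted first with deletionPolicy: Delete removes the physical snapshot out from under the other.

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: data-content-copy                # hypothetical copy of an existing VSC
spec:
  deletionPolicy: Delete                 # deleting this VSC also deletes the physical snapshot,
                                         # leaving any other VSC with the same handle dangling
  driver: example.csi.driver.io          # hypothetical driver name
  source:
    snapshotHandle: snap-0123abcd        # same handle as the original VSC
  volumeSnapshotRef:
    name: data-snapshot
    namespace: team-b                    # the second namespace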


Do you have a suggestion for what the updated YAML between the VSC and VS would look like in a 1:N mapping?

@ashish-amarnath
Author

This is what I imagine the VolumeSnapshotContentSpec type would look like in a 1:N mapping:

type VolumeSnapshotContentSpec struct {
        // VolumeSnapshotRefs replaces the current single VolumeSnapshotRef, so that
        // multiple VolumeSnapshots (across namespaces) can bind to this content.
        VolumeSnapshotRefs      []core_v1.ObjectReference
        Source                  VolumeSnapshotContentSource
        DeletionPolicy          DeletionPolicy
        Driver                  string
        VolumeSnapshotClassName *string
}

Every VolumeSnapshot object would add itself to the ref list. This would need some synchronization primitive to protect the ref list, and out-of-band manipulation of the ref list (think editing the spec by hand) could lead to undesirable and unexpected behavior.
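
For reference, the resulting YAML under this proposed (not implemented) spec might look roughly like the following, with a volumeSnapshotRefs list in place of today's single volumeSnapshotRef; the driver, names, and handle are hypothetical:

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: shared-content
spec:
  deletionPolicy: Retain
  driver: example.csi.driver.io          # hypothetical driver name
  source:
    snapshotHandle: snap-0123abcd        # pre-existing snapshot on the storage backend
  volumeSnapshotRefs:                    # hypothetical field; the current API has a single volumeSnapshotRef
  - name: data-snapshot
    namespace: team-a
  - name: data-snapshot
    namespace: team-b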

@xing-yang xing-yang added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 12, 2020
@xing-yang xing-yang self-assigned this Mar 12, 2020
@nrb
Contributor

nrb commented Mar 16, 2020

The other thing you can do is to copy the VolumeSnapshotContent object, with the same snapshotHandle.

If different VSCs have conflicting deletion policies, then one VSC could delete the handle underneath the others, but there's not a whole lot we can do about that.

@xing-yang
Collaborator

As discussed in today's Volume Snapshot WG meeting, there is a workaround. While the relationship between VolumeSnapshotContent and VolumeSnapshot remains a 1:1 mapping, we can create multiple VolumeSnapshotContents referencing the same snapshotHandle, which gives us many 1:1 VolumeSnapshotContent-to-VolumeSnapshot pairs.
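
For concreteness, a sketch of that workaround (hypothetical names, driver, and handle): one VolumeSnapshotContent/VolumeSnapshot pair per namespace, every content object pointing at the same snapshotHandle. Using deletionPolicy: Retain on each content avoids the hazard nrb described, where deleting one pair would delete the physical snapshot out from under the others, at the cost of having to clean up the backend snapshot out of band.

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: content-team-a
spec:
  deletionPolicy: Retain                 # Retain so this pair can be removed without destroying the snapshot
  driver: example.csi.driver.io          # hypothetical driver name
  source:
    snapshotHandle: snap-0123abcd        # the same backend snapshot in every pair
  volumeSnapshotRef:
    name: data-snapshot
    namespace: team-a
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: data-snapshot
  namespace: team-a
spec:
  source:
    volumeSnapshotContentName: content-team-a
---
# repeat both objects for team-b, team-c, ... changing only the names and namespaces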

@ashish-amarnath can this issue be closed as we have this workaround?

@ashish-amarnath
Author

Yes. closing this one.
/close

@k8s-ci-robot
Contributor

@ashish-amarnath: Closing this issue.

In response to this:

Yes. closing this one.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@benlangfeld

benlangfeld commented Jun 28, 2020

@xing-yang @ashish-amarnath I don't think that workaround properly addresses the use case, and I think this should be reopened. Conflicting deletion policies are a problem. If there were a 1:many mapping and a Delete deletionPolicy, deletion of the VolumeSnapshotContent could be prevented until it is no longer bound to any VolumeSnapshots, thus preventing a proliferation of VolumeSnapshotContent objects with dangling snapshotHandle references. I don't think any workaround which does not address this is sufficient to satisfy the use case.

@stiller-leser

I have to agree with @benlangfeld; I also just stumbled across this issue. We plan to import many existing disk snapshots (in our cloud provider) into Kubernetes using VolumeSnapshotContent. This works fine as long as we do it in one namespace, but across namespaces it does not.

Sure, creating multiple VolumeSnapshot* tuples does work, but it would make the handling much harder, not only because of the deletion issue, but also for cleaning up the different namespaces (as we would be forced to set the deletionPolicy to Retain to prevent any issues). Additionally, a 1:many relationship would make it possible to distinguish between the snapshots I offer my customers centrally and the snapshots that they create in their namespaces.

Ideally, I would even go a step further and would like to be able to do something like this for the PVC:

spec:
  accessModes:
  - ReadWriteOnce
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: my-snapshot
    namespace: my-central-snapshot-management-namespace   # proposed new field
  resources:
    requests:
      storage: 1Gi

The permissions of my service account would then determine whether I am allowed to access the snapshot storage.

@xing-yang
Collaborator

There is ongoing design work on namespace transfer for resources such as PVCs and VolumeSnapshots. The following KEP will be updated for this work:
kubernetes/enhancements#1555

@benlangfeld

@xing-yang Namespace transfer doesn't solve this use case either.

@stiller-leser

stiller-leser commented Mar 30, 2021

So half a year later I "re-found" this issue when, as predicted by @nrb, I ran into an issue where one VolumeSnapshotContent-insert-namespace-here with deletionPolicy: Delete deleted the underlying snapshot (at the cloud provider level) out from under the other VolumeSnapshotContent-insert-namespace-here objects. When handling many namespaces and many snapshots, the tuple approach really has its problems.

In our use case we are adding new namespaces dynamically, to which we provision existing snapshots using VolumeSnapshot-insert-namespace-here. In parallel we are creating the respective VolumeSnapshotContent-insert-namespace-here resource "globally".

Our goal is to delete the snapshot on the cloud provider side when the last referencing VolumeSnapshotContent-insert-namespace-here is deleted. While we can "automatically" delete the VolumeSnapshot-insert-namespace-here when deleting the namespaces, deleting the VolumeSnapshotContent-insert-namespace-here in an orderly fashion is much harder.

We can't simply set the deletionPolicy to Delete whenever a VSC is deleted, because there might be others around. In our case we have a wrapper object around the VSC so that we can set the deletionPolicy to Delete on all VSCs that we want to delete and then delete them, but it would be way easier to only have one VSC to address.

In fact, we are likely to create a "global" VSC. It would not itself be bound to any VolumeSnapshot; we would only use it to delete the underlying snapshot. However, any of these approaches has the bitter taste of a workaround, as we are creating many more resources than we should need.

EDIT

While implementing this we realized that we now also need a "global" VolumeSnapshot as any VSC requires a volumeSnapshotRef.
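
A rough sketch of that "global" pair, assuming a dedicated management namespace (all names, the driver, and the handle are hypothetical): the per-namespace VSCs keep deletionPolicy: Retain, and only this global pair carries Delete, so removing it is what finally deletes the backend snapshot.

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshotContent
metadata:
  name: global-content-snap-0123abcd
spec:
  deletionPolicy: Delete                 # the only VSC allowed to delete the backend snapshot
  driver: example.csi.driver.io          # hypothetical driver name
  source:
    snapshotHandle: snap-0123abcd
  volumeSnapshotRef:                     # required even though this pair is never used for provisioning
    name: global-snapshot
    namespace: snapshot-management
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: global-snapshot
  namespace: snapshot-management
spec:
  source:
    volumeSnapshotContentName: global-content-snap-0123abcd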

@earthquakesan

I found this issue when I was looking into how to import existing snapshots from AWS. If anyone is here for the same reason, the example is located here: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/tree/master/examples/kubernetes/snapshot/manifests/snapshot-import
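
For readers who want the shape of that import without following the link, it boils down to the pre-provisioned-snapshot pattern, roughly as below (hypothetical names and snapshot ID; see the linked manifests for the authoritative version):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: imported-content
spec:
  deletionPolicy: Retain                 # keep the EBS snapshot even if the Kubernetes objects are deleted
  driver: ebs.csi.aws.com
  source:
    snapshotHandle: snap-0123456789abcdef0   # ID of the existing EBS snapshot (hypothetical)
  volumeSnapshotRef:
    name: imported-snapshot
    namespace: default
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: imported-snapshot
  namespace: default
spec:
  source:
    volumeSnapshotContentName: imported-content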
