What steps did you take and what happened:
With Velero 1.14.0: I'm running a VM on Contabo with RKE2 (Kubernetes v1.27.12+rke2r1), and I'm trying to push Velero backups to DigitalOcean's S3-compatible Object Storage. The backup storage location is tied to DigitalOcean's S3 Object Storage and currently looks like this:
# velero get backup-locations
NAME      PROVIDER   BUCKET/PREFIX   PHASE       LAST VALIDATED                   ACCESS MODE   DEFAULT
default   aws        efvelero        Available   2024-07-23 16:31:10 +0200 CEST   ReadWrite     true
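For reference, a BackupStorageLocation pointing at DigitalOcean Spaces typically looks like the sketch below. The bucket name is taken from the output above; the region, the endpoint URL, and the `checksumAlgorithm` setting are assumptions, not values from this report (the last one is often needed for non-AWS S3 endpoints with recent velero-plugin-for-aws releases):

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws                # DigitalOcean Spaces is S3-compatible, served via the AWS plugin
  objectStorage:
    bucket: efvelero           # bucket name from the output above
  config:
    region: fra1               # assumption: replace with your Space's region
    s3ForcePathStyle: "true"
    s3Url: https://fra1.digitaloceanspaces.com   # assumption: endpoint for that region
    checksumAlgorithm: ""      # assumption: disables SDK checksums some non-AWS S3 services reject
```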
The backup always fails at exactly 2.37 GB, with 131 items.
Just to note:
1. The same backup works against a MinIO deployment without any errors.
2. I have tried uploading a 10 GB file to the DO S3 bucket directly and it works, so I can confirm there is no object-size limit on the DO side.
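A single large upload can take a different path (one multipart upload) than the many repository writes kopia performs, so the two tests are not fully equivalent. To rule out multipart-specific limits on the Spaces side, the multipart path can be forced explicitly; the snippet below is a sketch with an assumed region endpoint and a local test file, not commands from the original report:

```shell
# assumption: aws CLI v2 configured with the Space's access key and secret
# lower the multipart threshold so even modest files take the multipart path
aws configure set default.s3.multipart_threshold 64MB
aws --endpoint-url https://fra1.digitaloceanspaces.com \
    s3 cp ./test-10gb.bin s3://efvelero/multipart-test/test-10gb.bin
```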
What did you expect to happen:
I expected the backup to complete successfully; instead it fails after a certain period of time.
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use `velero debug --backup <backupname> --restore <restorename>` to generate the support bundle and attach it to this issue. For more options, see `velero debug --help`.
Attached as bundle-2024-07-23-16-48-25.tar.gz
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/velero -n velero: attached.
velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
# velero backup describe ef-mongo
Name:         ef-mongo
Namespace:    velero
Labels:       velero.io/storage-location=default
Annotations:  velero.io/resource-timeout=10m0s
              velero.io/source-cluster-k8s-gitversion=v1.27.12+rke2r1
              velero.io/source-cluster-k8s-major-version=1
              velero.io/source-cluster-k8s-minor-version=27

Phase:  Failed (run `velero backup logs ef-mongo` for more information)

Namespaces:
  Included:  ef-external
  Excluded:  <none>

Resources:
  Included:        pods, persistentvolumeclaims, persistentvolumes
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  app.kubernetes.io/name=mongodb
Or label selector:  <none>

Storage Location:  default

Velero-Native Snapshot PVs:  auto
Snapshot Move Data:          false
Data Mover:                  velero

TTL:  720h0m0s

CSISnapshotTimeout:    10m0s
ItemOperationTimeout:  4h0m0s

Hooks:  <none>

Backup Format Version:  1.1.0

Started:    2024-07-23 16:37:06 +0200 CEST
Completed:  <n/a>

Expiration:  2024-08-22 16:37:06 +0200 CEST

Total items to be backed up:  3
Items backed up:              3

Backup Volumes:
  Velero-Native Snapshots: <none included>
  CSI Snapshots: <none included or not detectable>
  Pod Volume Backups - kopia (specify --details for more information):
    Completed:  1

HooksAttempted:  0
HooksFailed:     0
velero backup logs <backupname>
velero backup logs ef-mongo
An error occurred: file not found
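`velero backup logs` downloads the per-backup log object from the backup storage location; for a backup that failed before its log was uploaded, that object does not exist, which is why the command returns "file not found". In that case the server log is the fallback; a sketch:

```shell
# read the Velero server log directly and filter for this backup
kubectl -n velero logs deployment/velero | grep ef-mongo
```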
velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml N/A
velero restore logs <restorename> N/A
Anything else you would like to add:
Environment:
Velero version (use velero version): 1.14.0
Velero features (use velero client config get features):
# velero client config get features
features: <NOT SET>
**Vote on this issue!**
This is an invitation to the Velero community to vote on issues, you can see the project's [top voted issues listed here](https://github.com/vmware-tanzu/velero/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).
Use the "reaction smiley face" up to the right of this comment to vote.
- :+1: for "I would like to see this bug fixed as soon as possible"
- :-1: for "There are more important bugs to focus on right now"
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 14 days. If a Velero team member has requested log or more information, please provide the output of the shared commands.
My backup-location status, the list of Velero plugins, and the logs so far:
kubectl logs deploy velero.txt
I tried to take a backup of the MongoDB pod (deployed using a Helm chart):
- Annotated the pod using:
- Created the backup for mongo using:
Logs so far (still in progress):
velero-backup-describe-ef-mongo--details.txt
The backup uploads until it reaches 2.37 GB; after 2 minutes, the backup fails, with logs as per:
kubectl-logs-deployment-velero-n-velero.txt
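The exact annotate/create commands were in the original attachments; for readers, they typically look like the sketch below. The pod and volume names here are hypothetical, while the namespace, resource filter, and label selector match the `velero backup describe` output in this report:

```shell
# hypothetical pod and volume names; namespace and selector are from the report
kubectl -n ef-external annotate pod ef-mongo-mongodb-0 \
    backup.velero.io/backup-volumes=datadir
velero backup create ef-mongo \
    --include-namespaces ef-external \
    --include-resources pods,persistentvolumeclaims,persistentvolumes \
    --selector app.kubernetes.io/name=mongodb
```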
Environment:
- Velero version (`velero version`): 1.14.0
- Velero features (`velero client config get features`): <NOT SET>
- Kubernetes version (`kubectl version`): v1.27.12+rke2r1
- Kubernetes installer & version: rke2
- OS (`cat /etc/os-release`):
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal