There is a new volume mount node-conf-redis-cluster defined for version 0.15.2 and it's not defined in values.yaml #114
We can do that, but I thought that volume should only store the node configuration. If you want to increase the size, you should increase the other volume, which stores the data; changing this one might not be a good idea. What do you think about that? One more thing: there should be a default storage class in your k8s cluster to provision a 1Mi volume.
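For context, a minimal sketch of checking whether the cluster has a default StorageClass (the class name "standard" below is just an example, not something from this thread):

```bash
# List storage classes; one should be marked "(default)" for the small
# node-conf PVC to bind automatically.
kubectl get storageclass

# If none is marked default, annotate one (class name is an assumption here):
kubectl patch storageclass standard -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```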
I agree, but with some of the private cloud providers the default minimum is 5GB, so getting a volume as small as 1Mi might not be possible. Are there any alternative approaches for this?
This is actually a real problem I have seen. I might try a node volume bind, but I don't want to be stuck to that node. This will probably be addressed in the next release.
I think it is a good practice and it is expected, when you use persistent storage, to be able to define the storage class.
@shubham-cmyk The latest version, 0.15.3, still has issues when running helm install redis-cluster ot-helm/redis-cluster
This is because you have not updated the CRD.
@shubham-cmyk or @iamabhishek-dubey I am also facing the same issue. May I know the exact steps for upgrading the CRD? If we already have a CRD in our cluster, won't kubectl apply -f newcrdfile.yaml just override the existing (old) CRD?
You have to delete the CRD manually; only then will the new one install. Yes, you have to uninstall and reinstall it, and to prevent data loss you have to make a backup and restore it afterwards.
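For readers hitting the same thing, a hedged sketch of the manual CRD refresh being described here; the resource name redisclusters and the ot-helm/redis-operator chart reference are assumptions on my part, so adjust to what kubectl get crd actually reports:

```bash
# Deleting a CRD deletes every custom resource of that kind, which is why
# a backup/restore is needed first.
kubectl get crd | grep -i redis                        # find the operator's CRDs
kubectl get redisclusters -A -o yaml > rc-backup.yaml  # back up existing CRs (resource name assumed)
kubectl delete crd <crd-name-from-the-list-above>      # remove the stale definition

# Reinstall/upgrade the operator chart (chart name assumed) so the new CRD
# definitions are applied, then retry the redis-cluster install.
helm upgrade --install redis-operator ot-helm/redis-operator
```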
@revathyr13 I will write a migration doc; I think most of the users are facing this.
@shubham-cmyk
We do have some scripts that can back up to S3 and restore from it. Check the scripts: https://github.com/OT-CONTAINER-KIT/redis-operator/tree/master/scripts There are some other options available for the migration, like Velero; you could check that out as well.
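As a hedged illustration of the Velero route mentioned above (the namespace and backup names are my own placeholders):

```bash
# Back up everything in the namespace running the Redis cluster
velero backup create redis-backup --include-namespaces redis

# Later, on the upgraded/new cluster, restore from that backup
velero restore create --from-backup redis-backup
```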
Thank you
Hello @shubham-cmyk, I tried the backup scripts from my end. As per my understanding, the backup script creates RDB snapshots of each master node and uploads them to AWS/GCP S3 buckets; in our case it was AWS. This part works fine for me. However, the restore part didn't work. As per the script https://github.com/OT-CONTAINER-KIT/redis-operator/blob/master/scripts/restore/restore.bash, it restores the latest RDB snapshot of each master pod to the respective master pod, right? So please briefly explain the backup/restore process. Also, do we need to take RDB snapshots of all the pods in the source cluster (master and slave) and restore them? I migrated the data from a Redis 6 cluster running on operator version 0.10 to Redis 7 running on operator version 0.15. Not sure if the restore/backup steps change depending on the operator version.
Yes, you are right.
I created a new cluster and restored the snapshots from AWS directly to the Redis master pods. I think the Redis cluster was running at that time. In some docs I noticed that we have to stop the Redis service before restoring the dump file. As I couldn't find any method for that, I just restored without stopping the service. Please share the backup doc so that I can retry with its help. Thank you.
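For reference, a minimal sketch of the manual idea being discussed: the dump.rdb has to be in the data directory before the Redis process starts, otherwise a running instance can overwrite it on shutdown. The pod name, namespace, and the assumption that AOF is disabled are all mine; the approach recommended in the reply below is an initContainer.

```bash
# Copy the dump into the pod's data volume (names assumed)
kubectl cp ./dump.rdb redis/redis-cluster-leader-0:/data/dump.rdb

# Stop Redis WITHOUT saving, so it does not overwrite the copied file;
# the container restarts and loads /data/dump.rdb from the PVC on startup.
kubectl exec -n redis redis-cluster-leader-0 -- redis-cli SHUTDOWN NOSAVE
```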
@revathyr13 Yes, we have to use the initContainer for that.
Can you please share the documentation so that we can get a better understanding? Awaiting your response.
It is not published on the website yet, but you can review these links: backup: https://github.com/OT-CONTAINER-KIT/redis-operator/tree/master/scripts/backup There are backup.md and restore.md there, plus we have a manifest, Docker image, and env_vars.env that are used in this process.
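A short sketch of pulling those docs and scripts locally (nothing beyond cloning the repo is assumed here):

```bash
git clone https://github.com/OT-CONTAINER-KIT/redis-operator.git
cd redis-operator/scripts
ls backup restore   # backup.md, restore.md, env_vars.env and the manifests referenced above
```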
Hello @shubham-cmyk, I tried passing the restore Docker image as an init container. The dump files were restored properly as dump.rdb, but the restoration was still not successful. Let me explain the restoration steps I tried.
The error I hit was: "10.236.70.209:6379 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0." I tried FLUSHDB as well, but that didn't help.
Not sure if I am missing anything in the restore process. I am attaching the manifest I used. Please have a look and let me know your thoughts.
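For anyone debugging the same error, a hedged diagnostic sketch (pod and namespace names are assumptions): the node is rejected from cluster creation either because it still remembers old peers or because database 0 already contains keys, and FLUSHDB alone does not clear the peer state.

```bash
# See whether the node remembers other nodes and whether it holds keys
kubectl exec -n redis redis-cluster-leader-0 -- redis-cli CLUSTER NODES
kubectl exec -n redis redis-cluster-leader-0 -- redis-cli DBSIZE

# If the state is left over from an earlier attempt (and the keys are NOT the
# data you just restored), it can be cleared; note that flushing would also
# wipe a freshly restored dump, which is why the restore has to happen only
# after the cluster is formed (or via the initContainer flow discussed above).
kubectl exec -n redis redis-cluster-leader-0 -- redis-cli FLUSHALL
kubectl exec -n redis redis-cluster-leader-0 -- redis-cli CLUSTER RESET HARD
```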
@revathyr13
Hello @shubham-cmyk, thanks for the update. Version details: Source cluster: Destination cluster:
You should use Redis v7.0.11 with operator v0.15.0, @revathyr13.
Thanks for the update. I tried with version v7.0.11 as well; no luck. bash-5.1$ ls -la The above key has a true value in the source cluster.
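A small sketch of the kind of check being described here (pod, namespace, and key name are placeholders I made up):

```bash
# Confirm the dump landed in the data directory
kubectl exec -n redis redis-cluster-leader-0 -- ls -la /data

# Check a sample key on the destination; -c follows cluster redirects if
# the key hashes to a slot served by another node
kubectl exec -n redis redis-cluster-leader-0 -- redis-cli -c GET somekey
```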
Let me inspect this issue and see what the problem might be.
If the dump.rdb files are getting placed properly, it means the scripts are working fine. This might be some issue on the Redis side. I have to revisit the docs for restoring via dump.rdb. I am replaying the scenario right now and will update.
I just replayed the scenario. The keys were loaded, but the cluster was not being served properly, so not all keys were loaded. I am working on this. Check there; I have added a few manifests that I used and fixed a bug so that there is no restore to the follower pods.
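To illustrate the "no restore to the follower pods" fix in plain terms (this is my own sketch of the idea, not the actual script): followers resynchronise from their leader once the cluster is formed, so only leader pods need the dump placed.

```bash
# Hypothetical guard inside a restore step: skip pods that are followers.
# Pod naming (*leader* / *follower*) is an assumption for illustration.
case "$HOSTNAME" in
  *leader*)   echo "placing dump.rdb for $HOSTNAME" ;;                    # leaders get the restore
  *follower*) echo "skipping $HOSTNAME (will resync from its leader)" ;;  # followers do not
esac
```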
@shubham-cmyk
Any new updates?
I have updated the scripts for backup and restore. The restore on operator v0.15.1 is failing for now. I have opened an issue: OT-CONTAINER-KIT/redis-operator#625
There is a new volume mount, node-conf-redis-cluster, defined for version 0.15.2 with a default size of 1Mi, but it is not defined in values.yaml. Is it possible to declare it in values.yaml so that we can increase the size?
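As a purely hypothetical sketch of what the request amounts to (the value keys below are invented for illustration, not actual chart options): exposing the node-conf volume claim in values.yaml would let the size and storage class be overridden at install time, for example:

```bash
# Invented keys, for illustration only; check the chart's values.yaml for
# the real option names once the setting is exposed.
helm install redis-cluster ot-helm/redis-cluster \
  --set nodeConfVolumeClaimTemplate.resources.requests.storage=16Mi \
  --set nodeConfVolumeClaimTemplate.storageClassName=standard
```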