[Feature] Create/Restore Cluster Snapshots #160
Comments
Hi there, thanks for opening this issue. I'd be happy to review any pull request from your side and will have another look into this issue once I have some more time 👍
OK, I will do my best; let's see what happens.
Is there any progress on this, or something similar? I would also be interested in this functionality. If not, I would be interested in giving it a try as well, though I couldn't get to it within the next 2-3 weeks.
Please do, I have too much on my plate right now (2 jobs since Dec/2020), so unfortunately I couldn't do anything.
I just had a few more thoughts on this; here are some things to note:
I will give it a try! That would be enough for me, since k3s is our local env and only has one node.
@cfontes, did you have any success so far?
@iwilltry42 I tried your single-cluster proposal, but when creating the cluster again, k3d complains as follows:
at least in version:
On the other hand, I am trying to simply take a snapshot of the server container and use it as the image for creating the new cluster (
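For reference, a minimal sketch of that snapshot-as-image idea, assuming a single-server cluster named `k3s-local` and the current `k3d cluster ...` syntax (the cluster and image names are placeholders):

```bash
# Sketch: commit the running server container to a local image, then recreate
# the cluster from that image. Assumes a single-server cluster named "k3s-local".
docker commit "$(docker ps -q --filter name=k3d-k3s-local-server)" local/k3s:snapshot

k3d cluster delete k3s-local
k3d cluster create k3s-local --image local/k3s:snapshot
```

Whether the committed image actually carries the cluster state back is exactly the open question here, since anything stored on docker volumes is not part of the commit.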
Scope of your request
Be able to create snapshots for complex clusters and restore them at will
I think this is very useful for clusters with StatefulSets that take a long time to create; in my case, my local Kafka + ZooKeeper cluster takes around 10 minutes to be fully configured and populated, but I only need to do that once every couple of months.
Describe the solution you'd like
This project is extremely helpful. I opted to use it instead of plain k3s because I saw the possibility of using `docker commit` as a snapshot tool, so I could iterate fast. In case I break something I don't care too much about, I can just restart from the snapshot I committed and start adding my bugs to my code base again, very fast.
If it were a k3d-native command it would be perfect, but docker is fine for now.
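Purely as an illustration of what a native workflow could look like (these subcommands and flags do not exist in k3d; they are hypothetical):

```bash
# Hypothetical CLI sketch only -- k3d does not ship a snapshot subcommand today.
k3d snapshot create k3s-local --output ./k3s-local-snapshot.tar
k3d snapshot restore k3s-local --input ./k3s-local-snapshot.tar
```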
Describe alternatives you've considered
I tried and succeeded in creating a snapshot from a working k3d cluster with:
docker commit -m "snapshot" "$(docker ps --filter name=k3d-k3s-local-server -q)" rancher/k3s:v0.10.0-snapshot
After that I run `k3d delete -a` and `docker run 53cb9ed4ec58`, but I fail to restore my cluster to its initial state.
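For context, a bare `docker run <image-id>` misses most of the docker-level setup that k3d normally creates around the server container (its own network, a published API port, privileged mode, and the k3s `server` arguments). A rough sketch of what would have to be recreated by hand; the names and flag values below are assumptions, not the exact arguments k3d uses:

```bash
# Illustrative only: approximate the docker-level setup around the server container.
# Network name, container name, and port are assumptions for a cluster called "k3s-local".
docker network create k3d-k3s-local 2>/dev/null || true

docker run -d --privileged \
  --name k3d-k3s-local-server \
  --network k3d-k3s-local \
  -p 6443:6443 \
  rancher/k3s:v0.10.0-snapshot \
  server   # assumption: the committed image may already carry its original `server ...` command
```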
I can create a PR for this later, but I need some guidance on what has to be done for this kind of approach to succeed.
In the beginning, this `docker commit` and `docker run` approach would already be very useful if it worked. The current error I see when starting a single-server cluster with no agents is:
Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
So I am missing some mount point; I am just not sure what I need to manually recreate related to k3s-io/k3s#495. I guess `k3d delete` is removing this mount.
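One way to narrow this down is to list which paths on the running server container are volumes before deleting it, since `docker commit` does not include data stored in volumes. A hedged sketch (the container name filter is an assumption for a cluster named `k3s-local`):

```bash
# List the volume mount points of the server container; anything shown here
# (e.g. paths under /var/lib/rancher/k3s) is NOT captured by `docker commit`.
docker inspect \
  --format '{{ range .Mounts }}{{ .Destination }} ({{ .Type }}){{ "\n" }}{{ end }}' \
  "$(docker ps -q --filter name=k3d-k3s-local-server)"
```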