Kubernetes Update Operator #241
All the questions are spot-on, but many pieces are still moving, so I'll try to give an overview of the current state (which may change soonish). For reference, the historical decisions behind this are recorded at #3.
That isn't the intended usage, no. The scope of airlock is just to replace the same logic in locksmith, which only supported etcd as its distributed backend. The use case is for machines that already have direct access to an etcd cluster, likely without any access to the objects of a higher-level orchestrator. If you have to deploy an etcd cluster just for airlock, then there are better options to consider.
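For context, locksmith's etcd strategy is essentially a compare-and-swap semaphore kept in a single key. Below is a minimal Go sketch of that idea against etcd3; the endpoint, key name, and document shape are illustrative assumptions, not airlock's actual internals.

```go
// A minimal sketch of a compare-and-swap reboot semaphore in etcd3, in the
// spirit of the locksmith strategy that airlock replaces. The key name,
// document shape, and endpoint below are illustrative assumptions.
package main

import (
	"context"
	"encoding/json"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// semaphore mirrors the idea of locksmith's lock document: a fixed number
// of reboot slots and the IDs of the machines currently holding one.
type semaphore struct {
	TotalSlots int      `json:"total_slots"`
	Holders    []string `json:"holders"`
}

func tryAcquire(ctx context.Context, cli *clientv3.Client, key, machineID string) (bool, error) {
	resp, err := cli.Get(ctx, key)
	if err != nil || len(resp.Kvs) == 0 {
		return false, err
	}
	var sem semaphore
	if err := json.Unmarshal(resp.Kvs[0].Value, &sem); err != nil {
		return false, err
	}
	if len(sem.Holders) >= sem.TotalSlots {
		return false, nil // no free slot; back off and retry later
	}
	sem.Holders = append(sem.Holders, machineID)
	updated, _ := json.Marshal(sem)

	// CAS: only write if nobody else modified the document in the meantime.
	txn, err := cli.Txn(ctx).
		If(clientv3.Compare(clientv3.ModRevision(key), "=", resp.Kvs[0].ModRevision)).
		Then(clientv3.OpPut(key, string(updated))).
		Commit()
	if err != nil {
		return false, err
	}
	return txn.Succeeded, nil
}

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://etcd.example.com:2379"}, // assumed endpoint
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ok, err := tryAcquire(context.Background(), cli, "/cluster/reboot-semaphore", "node-a")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("acquired reboot slot:", ok)
}
```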
That's the idea, yes. But we don't plan to write orchestrators for each possible backend on our own, nor shove all of those into airlock. As of today, we are still stabilizing the basics of auto-updates, so fleet-wide orchestration is still on the development radar. The protocol is currently drafted at coreos/airlock#1, while the client support in Zincati is tracked at coreos/zincati#37.
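For readers who want to see the shape of that draft, the Go sketch below shows roughly what a client-side pre-reboot request could look like. The base URL, group name, and machine ID are placeholders, and the endpoint and header details should be checked against the draft in coreos/airlock#1 itself.

```go
// A hedged sketch of a FleetLock-style pre-reboot request, based on the
// protocol draft linked above. Server URL and machine ID are placeholders.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

type clientParams struct {
	ID    string `json:"id"`
	Group string `json:"group"`
}

type fleetLockBody struct {
	ClientParams clientParams `json:"client_params"`
}

// preReboot asks the lock server for a reboot slot; the matching
// /v1/steady-state call after the reboot releases it again.
func preReboot(baseURL, group, machineID string) error {
	body, err := json.Marshal(fleetLockBody{
		ClientParams: clientParams{ID: machineID, Group: group},
	})
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPost, baseURL+"/v1/pre-reboot", bytes.NewReader(body))
	if err != nil {
		return err
	}
	// The draft gates requests behind this header so that plain browser
	// requests cannot accidentally take or release locks.
	req.Header.Set("fleet-lock-protocol", "true")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("lock not granted: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := preReboot("http://lock-server.example.com", "default", "node-a-uuid"); err != nil {
		log.Fatal(err)
	}
	log.Println("reboot slot granted")
}
```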
@lucab Thanks for the overview! I am glad there will be similar functionality in the future.
Right now in Red Hat OpenShift we have the machine-config-operator (MCO) for this.
@lucab coreos/zincati#37 now seems to be closed; would you be open to sharing what the current state is? 😍
Related inquiry: coreos/zincati#214.
@MPV I've left a few cross-links in place, so if you want to explore more, feel free to click through. However, below is a quick summary of the current status.
Circling back to my original reply, we are now basically at this point:
I do not get why airlock was built on etcd instead of k8s as a backing store. I think airlock should actually be configurable to use k8s locking mechanisms. Edit: the question is also, what happens if airlock is only installed on one node and that node restarts? Does the lock still stand, or does the node retry until the airlock server is up again? If the latter is the case, it should be fairly simple to create a good k8s integration.
This is recorded, with the actual historical details and technical discussions, at #3; feel free to go through it. The TL;DR is "because it replaces the locksmith etcd strategy". Also, please be aware that the k8s API does not model a database with strongly consistent primitives (e.g. old HA clusters without "etcd quorum reads" do return stale reads).
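To make the quorum-read distinction concrete, here is a short Go sketch of the two read modes etcd's own client exposes; the endpoint and key are illustrative.

```go
// A short sketch contrasting etcd's read modes, to make the stale-read
// caveat above concrete. Endpoint and key are illustrative assumptions.
package main

import (
	"context"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://etcd.example.com:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Default reads are linearizable: they go through quorum and never
	// return stale data. This is what a lock service must rely on.
	if _, err := cli.Get(ctx, "/cluster/reboot-semaphore"); err != nil {
		log.Fatal(err)
	}

	// Serializable reads are served from the local member and may be
	// stale; fine for dashboards, dangerous for locking decisions.
	if _, err := cli.Get(ctx, "/cluster/reboot-semaphore", clientv3.WithSerializable()); err != nil {
		log.Fatal(err)
	}
}
```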
That's understandable, but airlock's design scope explicitly does not cover it. The client->server protocol itself is documented at https://github.com/coreos/airlock/pull/1/files and is designed to be easy to implement as a small web service on top of any consistent database.
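To illustrate what "a small web service on top of any consistent database" could look like, here is a hedged Go skeleton with the two drafted endpoints and a pluggable store. The in-memory store only exists to make the sketch self-contained; a real deployment would back `LockStore` with a consistent shared database.

```go
// A hedged skeleton of a FleetLock-style lock server: two endpoints and a
// pluggable store. Names and semantics here are assumptions for the sketch.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"sync"
)

// LockStore is whatever consistent backend you choose (etcd, a transactional
// SQL database, Kubernetes Lease objects, ...).
type LockStore interface {
	TryLock(group, id string) error
	Unlock(group, id string) error
}

// memStore is a stand-in for demos only; process memory is not a
// consistent shared backend.
type memStore struct {
	mu      sync.Mutex
	holders map[string]string // group -> current holder ID
}

func (m *memStore) TryLock(group, id string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if cur, ok := m.holders[group]; ok && cur != id {
		return fmt.Errorf("slot held by %s", cur)
	}
	m.holders[group] = id
	return nil
}

func (m *memStore) Unlock(group, id string) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.holders[group] == id {
		delete(m.holders, group)
	}
	return nil
}

type requestBody struct {
	ClientParams struct {
		ID    string `json:"id"`
		Group string `json:"group"`
	} `json:"client_params"`
}

func handler(store LockStore, acquire bool) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if r.Header.Get("fleet-lock-protocol") != "true" {
			http.Error(w, "missing fleet-lock-protocol header", http.StatusBadRequest)
			return
		}
		var body requestBody
		if err := json.NewDecoder(r.Body).Decode(&body); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		var err error
		if acquire {
			err = store.TryLock(body.ClientParams.Group, body.ClientParams.ID)
		} else {
			err = store.Unlock(body.ClientParams.Group, body.ClientParams.ID)
		}
		if err != nil {
			// A non-2xx reply tells the client to back off and retry.
			http.Error(w, err.Error(), http.StatusConflict)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}

func main() {
	store := &memStore{holders: map[string]string{}}
	http.HandleFunc("/v1/pre-reboot", handler(store, true))
	http.HandleFunc("/v1/steady-state", handler(store, false))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```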
The PR actually points to a rough explanation, not to proper protocol documentation.
Just saw this new project being worked on by Rancher, meant to be a more generic upgrade operator, not just Rancher-specific. I wonder if it could be enhanced to work with FCOS upgrades. It might even work as it is; I need to dig into it more. https://github.com/rancher/system-upgrade-controller
@lukasmrtvy Excellent question! @lucab, do you know what that text is about?
Looks like that text was part of our announcement launch FAQ posted in June of 2018, so it may have been a little misguided or incorrect in retrospect.
Bunch of updates:
https://github.com/poseidon/fleetlock implements Zincati's FleetLock protocol on Kubernetes. It's small, nothing fancy (no drain).
It actually has drain support now.
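For the curious, one reasonable way to implement such a lock on plain Kubernetes objects is a Lease per update group, relying on the API server's optimistic concurrency (resourceVersion conflicts) for safety. The client-go sketch below shows that general approach; the namespace and naming are assumptions, not necessarily fleetlock's actual implementation.

```go
// A hedged sketch of lock-taking on plain Kubernetes objects: one Lease per
// update group, with create/update conflicts providing mutual exclusion.
// Namespace and naming here are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"log"

	coordinationv1 "k8s.io/api/coordination/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func tryAcquire(ctx context.Context, cs *kubernetes.Clientset, ns, group, nodeID string) error {
	name := "reboot-lock-" + group // assumed naming scheme
	leases := cs.CoordinationV1().Leases(ns)

	lease, err := leases.Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// No lock yet: create it with ourselves as holder. Create fails
		// with AlreadyExists if another node races us, which is the point.
		_, err = leases.Create(ctx, &coordinationv1.Lease{
			ObjectMeta: metav1.ObjectMeta{Name: name},
			Spec:       coordinationv1.LeaseSpec{HolderIdentity: &nodeID},
		}, metav1.CreateOptions{})
		return err
	}
	if err != nil {
		return err
	}
	if h := lease.Spec.HolderIdentity; h != nil && *h != "" && *h != nodeID {
		return fmt.Errorf("reboot slot held by %s", *h)
	}
	// Update carries the resourceVersion we read; a conflicting writer
	// makes this fail with a 409 instead of silently overwriting.
	lease.Spec.HolderIdentity = &nodeID
	_, err = leases.Update(ctx, lease, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := tryAcquire(context.Background(), cs, "default", "default", "node-a"); err != nil {
		log.Fatal(err)
	}
	log.Println("acquired reboot slot")
}
```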
Currently on CoreOS Container Linux we make use of the Container Linux Update Operator to orchestrate the updates (restarts) of our Kubernetes cluster nodes, based on its configuration and an agent integrating with locksmith. Will there be an equivalent for Fedora CoreOS that can be deployed to a Kubernetes cluster and work with Zincati to orchestrate updates?
I noticed the airlock project, which can run as a container and needs to connect to an etcd3 server (cluster). While running under Kubernetes we already have etcd nodes, but we cannot give access to those (policy). Does this mean we are required to run another etcd cluster just for updates, or is it possible to make use of Kubernetes objects to orchestrate the updates using an operator?