Don't turn off CoreOS updates #89
The update engine was turned off for the sake of the development user experience: otherwise, nodes always download an image and restart on first boot, because the Vagrant box itself (or AMI) is not updated (only the OS inside the VM is, until the VM is destroyed). Technically this is fine, but during development it gets in the way when you are frequently destroying/creating nodes and consuming from the "alpha" channel, which is updated nearly weekly. We can definitely leave this issue open so we can track progress toward a better solution than hard-coding. Would exposing the update strategy (https://coreos.com/os/docs/latest/update-strategies.html) as a deployment configurable better suit your use cases?
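For reference, the update strategy linked above is set in the node's cloud-config. A minimal sketch of what exposing it as a configurable might produce (the `reboot-strategy` values are from the CoreOS docs; how kube-aws would surface this option is an assumption):

```yaml
#cloud-config
coreos:
  update:
    # "etcd-lock" coordinates reboots through etcd so only one node
    # restarts at a time; other documented values include "reboot",
    # "best-effort", and "off" (which still downloads updates but
    # never reboots).
    reboot-strategy: etcd-lock
```

With `etcd-lock`, a cluster can stay on automatic updates without multiple nodes rebooting simultaneously, which addresses the availability concern raised in this issue.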
Thanks for the info. Are the clusters created by kube-aws intended for development? It seems odd to optimize for that if not. If there are significant differences between a development cluster and a production cluster, perhaps they should be exposed as two different configurations/flavors. Making it configurable is probably a good idea in any case.
The initial optimization was around easily deploying a fully functioning Kubernetes cluster on AWS. The tool can be used for deploying production clusters, but there will generally be customization on the deployer's end (e.g. TLS assets). Moving forward, we need to figure out reasonable defaults and more easily expose common configurations (like the update strategy). Also, just to note, you can use the kube-aws tool to render completely custom assets, so all configuration can be changed even if a specific option is not yet easily exposed (https://github.com/coreos/coreos-kubernetes/tree/v0.1.0/multi-node/aws#custom-kubernetes-manifests).
+1. Other than TLS (which needs better docs for use in production) and this issue (CoreOS updates disabled), what else is missing from a production deployment? Alternatively, if this project is targeted at developers taking CoreOS + Kubernetes for a test drive rather than meant for use in production, please make this absolutely clear in the docs.
See this issue for a production quality checklist - #340 |
Noticed the same issue using the generic scripts for a custom setup. |
I notice that the cluster install scripts turn off CoreOS's update engine. Why is this necessary? This goes against one of the big benefits and motivations behind CoreOS as I understood them: to keep machines updated automatically and to encourage infrastructure that can withstand the loss of any individual machines (e.g. as they restart for an update.)
I assume there's a good reason updates are disabled for now, but once it's been explained why, I'd like to keep this issue open to track progress towards being able to turn them on.
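In the meantime, updates can be turned back on for an individual node by hand. A sketch, assuming the scripts disable updates by stopping/masking the standard CoreOS units (the unit names and the `update_engine_client` flag are from CoreOS documentation, not from this repo's scripts):

```shell
# On a running CoreOS node: re-enable the update engine and the
# reboot coordinator (locksmithd), then trigger an update check.
sudo systemctl unmask update-engine.service locksmithd.service
sudo systemctl start update-engine.service locksmithd.service

# Optionally ask the update engine to check for an update right away
# instead of waiting for its periodic check.
update_engine_client -check_for_update
```

Note this only affects the node it is run on; newly created nodes would still come up with updates disabled until the cloud-config is changed.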