Conversation
@jchauncey and @CecileRobertMichon please take a look at this approach and weigh in on whether or not this is an improvement. I've made a guinea pig of the kubelet config. The real proposal here is that we settle on a common, thin interface to expose all Kubernetes component runtime config, and then move all existing one-off config vectors into an organized place. For example:
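A minimal sketch, under assumed field names, of what that thin interface could look like on the Go side; only `KubeletConfig` is what this PR actually adds, and the other two fields are assumptions about where follow-up work might land:

```go
package api

// Sketch only: each component's runtime config becomes a plain key/value map
// whose keys are the exact flags handed to that component at runtime.
// KubeletConfig is what this PR introduces; the other two fields are
// assumptions about possible follow-ups, not part of this change.
type KubernetesConfig struct {
	KubeletConfig           map[string]string `json:"kubeletConfig,omitempty"`
	APIServerConfig         map[string]string `json:"apiServerConfig,omitempty"`
	ControllerManagerConfig map[string]string `json:"controllerManagerConfig,omitempty"`
}
```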
i like it. 👍
👍👍 I'm definitely in favor of decluttering `KubernetesConfig`
pkg/acsengine/engine.go
Outdated
if properties.OrchestratorProfile.KubernetesConfig.KubeletConfig != nil {
    for key, val := range properties.OrchestratorProfile.KubernetesConfig.KubeletConfig {
        switch key {
        case "--cluster-dns":
Out of curiosity, what's the reasoning behind naming this "--cluster-dns" vs "cluster-dns" since in this context it's not used as an arg but as a json string?
Good question. The thinking is to transparently use the exact string that we'll be passing to the component at runtime, so that we may eventually just pass it on as-is. Also, `--` as a prefix might not be generally valid going forward; it's possible some options might have a `-` prefix (or some other prefix).
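A small, self-contained illustration (not acs-engine's actual template code) of why keeping the literal flag string as the key is convenient: the map flattens straight into a command line with no key translation.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Illustrative only: because the map keys are the literal runtime flags,
// building a kubelet command line is a straight pass-through.
func kubeletFlags(cfg map[string]string) string {
	keys := make([]string, 0, len(cfg))
	for k := range cfg {
		keys = append(keys, k)
	}
	sort.Strings(keys) // deterministic ordering for the example
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("%s=%s", k, cfg[k]))
	}
	return strings.Join(parts, " ")
}

func main() {
	fmt.Println(kubeletFlags(map[string]string{
		"--cluster-dns":    "10.0.0.10",
		"--cluster-domain": "cluster.local",
	}))
	// Output: --cluster-dns=10.0.0.10 --cluster-domain=cluster.local
}
```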
Force-pushed from fda216e to 07f2c97.
Force-pushed from 8a4f776 to b738e05.
pkg/api/types.go
Outdated
PreprovisionExtension *Extension       `json:"preProvisionExtension"`
Extensions            []Extension      `json:"extensions"`
Distro                Distro           `json:"distro,omitempty"`
KubernetesConfig      KubernetesConfig `json:"kubernetesConfig,omitempty"`
Is KubernetesConfig part of both OrchestratorProfile and Master+Agent Profile?
For the purposes of this PR, this is a change that enables master-specific and agent-specific kubelet configs (as opposed to using a common one). In a longer-term sense, adding `KubernetesConfig` to the master/agents separately means that we can do k8s configuration generally on a per-master and per-agent node basis. I couldn't find a more elegant way to do this from a kubelet config perspective, but I'm open to ideas.
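Roughly the shape this takes in the API types; a heavily abbreviated sketch (most existing fields omitted, `KubernetesConfig` reduced to just the kubelet map), not the canonical pkg/api/types.go definitions:

```go
package api

// Abbreviated sketch of the idea: giving the master profile and each agent
// pool profile their own KubernetesConfig is what enables per-master and
// per-agent kubelet settings.
type KubernetesConfig struct {
	KubeletConfig map[string]string `json:"kubeletConfig,omitempty"`
}

type MasterProfile struct {
	Count            int              `json:"count"`
	VMSize           string           `json:"vmSize"`
	KubernetesConfig KubernetesConfig `json:"kubernetesConfig,omitempty"`
}

type AgentPoolProfile struct {
	Name             string           `json:"name"`
	Count            int              `json:"count"`
	VMSize           string           `json:"vmSize"`
	KubernetesConfig KubernetesConfig `json:"kubernetesConfig,omitempty"`
}
```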
Ok, that makes sense. What about removing `KubernetesConfig` from `OrchestratorProfile`? In that case `KubernetesConfig` would act like `vmSize`, where masters and agents have their own, and in the case of a common one it is duplicated in the master and agent profiles.
Eventually we may do that, but that's pretty destructive (everyone's API models out there in the world refer to it). My thoughts for this are:
- `OrchestratorProfile.KubernetesConfig` = common (all Azure resources) Kubernetes-related config
- `MasterProfile.KubernetesConfig` = Kubernetes-related config for master nodes
- `AgentPoolProfile.KubernetesConfig` = Kubernetes-related config for agent nodes in an agent pool
So what happens if I use the following api model?
```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "maxPods": "110"
      }
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "",
      "vmSize": "Standard_D2_v2",
      "kubernetesConfig": {
        "maxPods": "220"
      }
    },
    "agentPoolProfiles": [
      {
        "name": "agentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v2",
        "availabilityProfile": "AvailabilitySet",
        "kubernetesConfig": {
          "maxPods": "220"
        }
      }
    ],
    ...
  }
}
```
At present, nothing. The operable changes copy over the currently supported KubernetesConfig (from OrchestratorProfile) so that it can be consumed by the masters and by the agents.
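A hypothetical sketch of the copy-over being described here, with assumed, simplified type shapes and a made-up helper name; the real logic lives in the acs-engine code referenced above:

```go
package main

import "fmt"

// Hypothetical, simplified type shapes for illustration only.
type KubernetesConfig struct {
	KubeletConfig map[string]string
}

type OrchestratorProfile struct{ KubernetesConfig KubernetesConfig }
type MasterProfile struct{ KubernetesConfig KubernetesConfig }
type AgentPoolProfile struct{ KubernetesConfig KubernetesConfig }

type Properties struct {
	OrchestratorProfile OrchestratorProfile
	MasterProfile       MasterProfile
	AgentPoolProfiles   []AgentPoolProfile
}

// propagateKubeletConfig (a made-up helper name) propagates the common
// OrchestratorProfile-level kubelet map to the master and agent pool profiles
// when they don't define their own, so downstream consumers can always read
// the per-profile value.
func propagateKubeletConfig(p *Properties) {
	common := p.OrchestratorProfile.KubernetesConfig.KubeletConfig
	if p.MasterProfile.KubernetesConfig.KubeletConfig == nil {
		p.MasterProfile.KubernetesConfig.KubeletConfig = common
	}
	for i := range p.AgentPoolProfiles {
		if p.AgentPoolProfiles[i].KubernetesConfig.KubeletConfig == nil {
			p.AgentPoolProfiles[i].KubernetesConfig.KubeletConfig = common
		}
	}
}

func main() {
	p := &Properties{
		OrchestratorProfile: OrchestratorProfile{
			KubernetesConfig: KubernetesConfig{KubeletConfig: map[string]string{"--max-pods": "110"}},
		},
		AgentPoolProfiles: make([]AgentPoolProfile, 1),
	}
	propagateKubeletConfig(p)
	fmt.Println(p.MasterProfile.KubernetesConfig.KubeletConfig)        // inherits the common map
	fmt.Println(p.AgentPoolProfiles[0].KubernetesConfig.KubeletConfig) // inherits the common map
}
```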
Just want to jump in with a 👍. The use case we have is to allow setting feature gates, where an administrator can add (in our case) the gates they need. Was happy to see separate configs for kubelet and api-server in the example, not just the kubelet as the issue title suggests.
@croomes Yes, the example is a sort of high-level expression of why we're doing this with kubelet. To be clear, this PR is for kubelet only, but I'm aiming to refer to it as a model for api server and controller manager generic config implementations as well, hopefully in the next few weeks.
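For the feature-gate use case, a hypothetical example (the gate name is purely illustrative) of how that would look as just another entry in the kubelet map:

```go
package main

import "fmt"

func main() {
	// Hypothetical example: the gate name is illustrative, but enabling a
	// feature gate is just another exact kubelet flag in the map.
	kubeletConfig := map[string]string{
		"--feature-gates": "Accelerators=true",
	}
	fmt.Println(kubeletConfig["--feature-gates"])
}
```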
and to pass tests
Force-pushed from c17151c to 5df5f2a.
lgtm
|"--pod-manifest-path"|"/etc/kubernetes/manifests"| | ||
|"--cluster-domain"|"cluster.local"| | ||
|"--cloud-config"|"/etc/kubernetes/azure.json"| | ||
|"--cloud-provider"|"azure"| |
This can move up to the configurable section.
good catch!
Oh, actually: "either because a higher order configuration vector is available that enforces kubelet configuration". In this case we're calling it static because it is overridable by `useCloudControllerManager`. So I think we want to keep it here in terms of documentation (in other words, we don't want to encourage folks to modify `--cloud-provider`; we want them instead to set `useCloudControllerManager` to true to bypass the Azure cloud-provider implementation).
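An illustrative sketch, not the actual acs-engine logic, of why the docs treat `--cloud-provider` as static: its value follows from the higher-order `useCloudControllerManager` setting (with an external cloud controller manager, kubelet conventionally runs with `--cloud-provider=external`).

```go
package main

import "fmt"

// Illustrative only: the flag's value is derived from a higher-order setting,
// which is why the docs keep "--cloud-provider" out of the user-configurable
// set.
func cloudProviderFlag(useCloudControllerManager bool) string {
	if useCloudControllerManager {
		// Upstream convention when running an external cloud controller manager.
		return "external"
	}
	return "azure"
}

func main() {
	fmt.Println(cloudProviderFlag(false)) // azure
	fmt.Println(cloudProviderFlag(true))  // external
}
```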
What this PR does / why we need it: exposes a generic key/val interface for configuring kubelet, using the configuration keys documented here:
https://kubernetes.io/docs/reference/generated/kubelet/
Deprecates the following `kubernetesConfig` properties in favor of using the generic kubelet interface (i.e., if a `kubernetesConfig` property is basically a thin wrapper around kubelet runtime config, we deprecated it; if the `kubernetesConfig` property is a higher order configuration declaration that includes kubelet runtime config changes as a side-effect, we retained the `kubernetesConfig` property and made the relevant kubelet config property read-only):
- `nodeStatusUpdateFrequency`
- `hardEvictionThreshold`
- `nonMasqueradeCidr`
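For reference, a hypothetical mapping of those deprecated properties to the kubelet flags they wrap; flag names follow the upstream kubelet reference linked above:

```go
package main

import "fmt"

// Hypothetical mapping for illustration: the deprecated kubernetesConfig
// properties were thin wrappers around these kubelet flags, which can now be
// set directly through kubeletConfig.
var deprecatedToKubeletFlag = map[string]string{
	"nodeStatusUpdateFrequency": "--node-status-update-frequency",
	"hardEvictionThreshold":     "--eviction-hard",
	"nonMasqueradeCidr":         "--non-masquerade-cidr",
}

func main() {
	for prop, flag := range deprecatedToKubeletFlag {
		fmt.Printf("%s -> %s\n", prop, flag)
	}
}
```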
Release note: