installer/pkg/config: Support loading InstallConfig YAML #236
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: wking. The full list of commands accepted by this bot can be found here. The pull request process is described here.
@crawford, @abhinavdahiya, while working this up, I had a few thoughts about the
Thoughts?
"net" | ||
"reflect" | ||
|
||
yaml "gopkg.in/yaml.v2" |
We should be using github.com/ghodss/yaml. Kubernetes only does JSON marshal/unmarshal; github.com/ghodss/yaml does the JSON-to-YAML conversion. No need for special YAML parsing.
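To illustrate that point, here is a minimal, hypothetical sketch (not the installer's actual pkg/ipnet code): github.com/ghodss/yaml converts YAML to JSON and back, so a type that only implements JSON (un)marshallers round-trips through YAML with no YAML-specific code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net"

	"github.com/ghodss/yaml"
)

// IPNet is a hypothetical wrapper that only knows how to (un)marshal JSON.
type IPNet struct {
	net.IPNet
}

// MarshalJSON serializes the network in CIDR notation, e.g. "10.2.0.0/16".
func (n IPNet) MarshalJSON() ([]byte, error) {
	return json.Marshal(n.String())
}

// UnmarshalJSON parses CIDR notation back into the wrapped net.IPNet.
func (n *IPNet) UnmarshalJSON(b []byte) error {
	var s string
	if err := json.Unmarshal(b, &s); err != nil {
		return err
	}
	_, cidr, err := net.ParseCIDR(s)
	if err != nil {
		return err
	}
	n.IPNet = *cidr
	return nil
}

type networking struct {
	PodCIDR IPNet `json:"podCIDR"`
}

func main() {
	var cfg networking
	// ghodss/yaml translates the YAML document to JSON internally, so the
	// JSON tag and UnmarshalJSON above are all it needs.
	if err := yaml.Unmarshal([]byte("podCIDR: 10.2.0.0/16\n"), &cfg); err != nil {
		panic(err)
	}
	out, _ := yaml.Marshal(cfg)
	fmt.Print(string(out)) // podCIDR: 10.2.0.0/16
}
```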
> Kubernetes only does JSON marshal/unmarshal; github.com/ghodss/yaml does the JSON-to-YAML conversion.

Adding explicit YAML support to pkg/ipnet was only 40 lines of non-test additions, so it seemed reasonable to me. Are we sure this is only ever going to be consumed internally?

The immediate motivation was that installer/pkg/config currently uses github.com/ghodss/yaml, presumably to support legacy properties with different YAML and JSON serialization (like BaseDomain). If we wanted to only support github.com/ghodss/yaml with ipnet and pkg/types, then we'd need to import both packages in installer/pkg/config with distinct names and use the right name depending on whether we wanted the nominally-JSON or nominally-YAML tags for a given (un)marshal. It seemed less confusing to just add support for the library installer/pkg/config was already using.
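For illustration, the dual-import alternative would look roughly like this (hypothetical code, not from this PR); the library you call determines which set of struct tags applies:

```go
package config

import (
	ghodssyaml "github.com/ghodss/yaml" // honors `json:"..."` tags and JSON (un)marshallers
	yamlv2 "gopkg.in/yaml.v2"           // honors `yaml:"..."` tags

	"github.com/openshift/installer/pkg/types"
)

// parseBoth is a hypothetical helper showing the two libraries side by side:
// ghodss/yaml for the new InstallConfig types, yaml.v2 for the legacy Cluster
// config and its yaml tags.
func parseBoth(installConfigData, legacyData []byte) (*types.InstallConfig, *Cluster, error) {
	installConfig := &types.InstallConfig{}
	if err := ghodssyaml.Unmarshal(installConfigData, installConfig); err != nil {
		return nil, nil, err
	}
	cluster := &Cluster{}
	if err := yamlv2.Unmarshal(legacyData, cluster); err != nil {
		return nil, nil, err
	}
	return installConfig, cluster, nil
}
```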
installer/installer/pkg/config/cluster.go, line 76 in 84ab99e:

```go
BaseDomain string `json:"tectonic_base_domain,omitempty" yaml:"baseDomain,omitempty"`
```

terraform.tfvars.
This was done because they didn't want to write code to create terraform.tfvars.

So if I drop our direct gopkg.in/yaml.v2 use, do you want me to write separate code to create terraform.tfvars?
terraform.tfvars is definitely a separate structure. We shouldn't have done what we did with installer.Config to make it directly marshal into terraform.tfvars. The installer.Config ended up with fields like IgnitionMasters that have `yaml:"-"`, as we didn't want users to specify them.
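A rough sketch of the pattern being described (the fields and tags are illustrative, not the real installer.Config): the JSON tags drive terraform.tfvars generation, while `yaml:"-"` hides installer-computed fields from the user-facing YAML config.

```go
// Illustrative only: a config struct whose JSON tags feed terraform.tfvars,
// with `yaml:"-"` keeping installer-computed fields out of user YAML.
type Config struct {
	// User-settable in YAML, emitted as tectonic_base_domain in tfvars.
	BaseDomain string `json:"tectonic_base_domain,omitempty" yaml:"baseDomain,omitempty"`

	// Computed by the installer; users must not set it, so it is hidden
	// from YAML but still emitted into tfvars.
	IgnitionMasters []string `json:"tectonic_ignition_masters,omitempty" yaml:"-"`
}
```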
```go
metav1.ObjectMeta `json:"metadata"`
metav1.ObjectMeta `json:"metadata" yaml:"metadata"`
```
Please don't add specific YAML tags; Kubernetes doesn't do that.
I think a single platform level
The config is a one-time thing. We set the instance type for each pool in AWS too; I would want uniformity.
sure, sounds okay.
We should try to have no embedded structs; explicit fields are clearer when marshalling, and pointers make sense only when they are optional or one-of (AWS or Libvirt). Networking doesn't seem optional.
It's still nice to be convenient ;). And with #205, it seems like it might be something that is edited occasionally over the life of the cluster.
But I don't have to tweak those because it's easy to give them sane defaults. I do have to provide an image path, at least until we get code that automatically grabs this from the network (and caches it somewhere for future runs?) when the caller doesn't provide a path.
I'm fine with AWS-wide defaults for those as well, if you're ok with the cross-pool fallback approach. Should I add a

```yaml
platform:
  libvirt:
    uri: qemu:///system
    defaultMachinePlatform:
      qcowImagePath: /path/to/default-image
machines:
- name: master
  replicas: 1
  platform:
    libvirt:
      qcowImagePath: /path/to/special-case-image
- name: worker
  replicas: 2
```
No? We currently have defaults for all of its properties, and making it

We default, so that all the operators that depend on the networking block don't need to separately make decisions about the defaults.

I'm fine if we make that default for all platforms.

This makes sense to me when we write to the cluster (#205). I'm less clear on whether it makes sense in all cases. What if someone wants to use our package to write a cluster config before feeding it into the installer? We don't want to have to make them create their own defaults for all of these settings.
Only new

I think you're thinking of other consumers who are loading an

You want to make sure (4) doesn't need local defaults. I'm fine with that. I'd also like (1) to not need local defaults, and for that, I think we need
```
@@ -42,6 +36,126 @@ func ParseConfig(data []byte) (*Cluster, error) {
	return &cluster, nil
}

func parseInstallConfig(data []byte, cluster *Cluster) (err error) {
	installConfig := &types.InstallConfig{}
```
It would be nice if the defaulting and validation were both extracted separately here such that external callers could vendor and use that code. We're still trying to determine if we can re-use your InstallConfig or whether we have to use our own type and generate InstallConfig, but it might be good to do this regardless for futureproofing.
> It would be nice if the defaulting and validation were both extracted separately here such that external callers could vendor and use that code.
New external consumers should be using pkg/types directly; everything under installer/pkg should be considered deprecated. Having a validator for these types under pkg/ makes sense (currently pkg/asset/installconfig has user-input-time validation, but that's not useful for downstream Go consumers reading this from the cluster, e.g. openshift/machine-config-operator#47). And yeah, we'll probably need to expose the defaulting for tools that are monitoring InstallConfig to pick up post-install changes. But that will also be a pkg/ change, and this PR is trying to focus on getting support into the legacy installer/pkg/config (which has its own validation and defaulting approach). Can I punt the new APIs to follow-up work?
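Not part of this PR, but a possible shape for that follow-up, with function names and checks that are hypothetical rather than the installer's actual API:

```go
// Hypothetical sketch of pkg/types helpers that downstream consumers could
// vendor; the names and checks are illustrative, not merged installer code.
package types

import "errors"

// SetInstallConfigDefaults fills in defaults for unset fields so consumers
// reading an InstallConfig back from the cluster need no local defaults.
func SetInstallConfigDefaults(c *InstallConfig) {
	if len(c.Machines) == 0 {
		c.Machines = []MachinePool{{Name: "master"}, {Name: "worker"}}
	}
}

// ValidateInstallConfig reports problems with a (defaulted) InstallConfig.
func ValidateInstallConfig(c *InstallConfig) error {
	if c.BaseDomain == "" {
		return errors.New("baseDomain is required")
	}
	if len(c.Machines) == 0 {
		return errors.New("at least one machine pool is required")
	}
	return nil
}
```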
Totally fine with me.
```go
	}
}
default:
	return fmt.Errorf("unrecognized machine pool %q", machinePool.Name)
```
These names feel more like types. Additionally, it looks like there's not a lot of validation here; do we require two pools with these exact names? What should happen if we see multiple worker or master pool names?
Maybe too preliminary for that, but I just wanted to mention it.
> These names feel more like types.

I agree. @abhinavdahiya, can I replace:

```go
// Machines is the list of MachinePools that need to be installed.
Machines []MachinePool `json:"machines"`
```

with:

```go
Master MachinePool `json:"master"`
Worker MachinePool `json:"worker"`
```

or similar?
This would leave us with a little issue in Hive. If we use InstallConfig directly in our ClusterDeployment, this would be the long term definition of all machine pools. If we go to just one master, and one worker, then we would need to duplicate this info higher up in our own portion of ClusterDeployment for the long term definition of what machine pools should exist. That would mean a little duplication of data, and a strange section of install config that was only used during install.
Kinda feels like this should be capable of being the canonical source of machine sets that should exist in the cluster.
@wking I think we decided that machinepools remain as is for now. No changes. We can expand them if needed later.
> If we go to just one master, and one worker...

Just to be clear, I was suggesting one master pool (and one worker pool).

> I think we decided that machinepools remain as is for now. no changes

So just die screaming if master (say) is listed multiple times or not listed at all? And silently ignore entries with names that aren't master or worker?
I would suggest leaving name a freeform field and adding an explicit type. Technically the two types should probably be "control-plane" and "compute"; verify you have exactly one control-plane pool and at least one compute pool.
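A rough sketch of that suggestion follows; the Type field and these checks are hypothetical illustrations of the idea, not code from this PR.

```go
// Hypothetical validation following the suggestion above; the pool type
// field does not exist in this PR and is shown only to illustrate the idea.
package types

import (
	"errors"
	"fmt"
)

type poolSketch struct {
	Name string // free-form
	Type string // "control-plane" or "compute"
}

func validatePools(pools []poolSketch) error {
	var controlPlane, compute int
	for _, p := range pools {
		switch p.Type {
		case "control-plane":
			controlPlane++
		case "compute":
			compute++
		default:
			return fmt.Errorf("machine pool %q has unrecognized type %q", p.Name, p.Type)
		}
	}
	if controlPlane != 1 {
		return fmt.Errorf("expected exactly one control-plane pool, found %d", controlPlane)
	}
	if compute < 1 {
		return errors.New("expected at least one compute pool")
	}
	return nil
}
```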
Force-pushed: 13e0f52 -> 2faf6de -> 3de9cef -> 41f66f4 -> a87bab7.
Rebased around #119 with 41f66f4 -> a87bab7.

Looks like something just hung. In case it was a flake, I'll kick it again: /retest. But this may be something I've broken ;).
For folks using gopkg.in/yaml.v2. Folks using github.com/ghodss/yaml will use the JSON (un)marshallers. But supporting yaml.v2 is convenient for consumers who are already importing that package (e.g. because they want different JSON and YAML serializations).
I'd missed this while rerolling 89f05da (pkg/types/installconfig: Add AWSPlatform.UserTags, 2018-09-12, openshift#239). I've also updated the UserTags comment to use "for the cluster" instead of "by the cluster". Resources can be created by the installer (not part of the cluster) or by operators living inside the cluster, but regardless of the creator, these cluster resources should get UserTags.
github.com/ghodss/yaml leans on JSON tags and (un)marshallers, but yaml.v2 depends on distinct yaml tags like the ones I'm adding here. This commit also adds omitempty tags for a number of properties that could have reasonable defaults, which keeps the output more compact when serializing a partially-initialized structure.
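For illustration (a toy struct, not an installer type), this is the tag behavior the commit is accounting for; both libraries produce the same output only because both tag sets are present:

```go
package main

import (
	"fmt"

	ghodssyaml "github.com/ghodss/yaml"
	yamlv2 "gopkg.in/yaml.v2"
)

// Example carries both tag sets; drop the yaml tag and yaml.v2 would emit
// the lowercased field name ("basedomain") instead.
type Example struct {
	BaseDomain string `json:"baseDomain,omitempty" yaml:"baseDomain,omitempty"`
}

func main() {
	e := Example{BaseDomain: "example.com"}
	a, _ := ghodssyaml.Marshal(e) // uses the json tag
	b, _ := yamlv2.Marshal(e)     // uses the yaml tag
	fmt.Print(string(a), string(b)) // both print: baseDomain: example.com
}
```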
This is now the OpenShift installer, not the Tectonic installer. And the context of the examples is clear enough without the prefix.
So callers can start transitioning to the new approach. For a short transitional period, we'll continue to support the old config format as well.
Generated with:

$ bazel run //:gazelle

using:

$ bazel version
Build label: 0.16.1- (@Non-Git)
Build target: bazel-out/k8-opt/bin/src/main/java/com/google/devtools/build/lib/bazel/BazelServer_deploy.jar
Build time: Mon Aug 13 16:42:29 2018 (1534178549)
Build timestamp: 1534178549
Build timestamp as int: 1534178549
Force-pushed: a87bab7 -> 48f3a4f.
Smoke passed, so the e2e-aws failure must be a flake. /retest
So callers can start transitioning to the new approach. For a short transitional period, we'll continue to support the old config format as well.
There are a few smaller commits in this PR to setup for the meat in the final commit. Details in the commit messages.
I've only run the Go unit tests locally, so I've stuck a WIP tag on this PR until we get it through the integration tests; I wouldn't be surprised if lots of things are still broken ;).
CC @dgoodwin.