Openstack support for non loadbalancer deployments #6599
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: drekle. If they are not already assigned, you can assign the PR to them.
Needs approval from an approver in each of these files.
@@ -221,7 +221,7 @@ func (b *FirewallModelBuilder) addHTTPSRules(c *fi.ModelBuilderContext, sgMap ma
	addDirectionalGroupRule(c, masterSG, nodeSG, httpsIngress)
	addDirectionalGroupRule(c, masterSG, masterSG, httpsIngress)

	if b.UseLoadBalancerForAPI() {
This utility method checks whether cluster.Spec.API.LoadBalancer
is nil. I am unsure of the conditions under which this is true; however, in create cluster it seems this is always set to an empty struct if the following condition holds:
return s.DNS == nil && s.LoadBalancer == nil
I am using gossip without a loadbalancer in this scenario, connecting to one of the 3 masters. Gossip should still reference all masters internally, but if a master did die in this setup, a user might have to change their kubeconfig manually. I would not consider this a production-ready configuration, which makes me question whether these changes are warranted at all.
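For context, a minimal sketch of what such a utility method might look like. The types here are trimmed, hypothetical stand-ins for the kops API structs, not the upstream definitions:

```go
package main

import "fmt"

// Trimmed, hypothetical stand-ins for the kops cluster spec types
// discussed above; the real definitions live in the kops API packages.
type LoadBalancerAccessSpec struct{ Type string }

type AccessSpec struct {
	DNS          *struct{}
	LoadBalancer *LoadBalancerAccessSpec
}

type ClusterSpec struct{ API *AccessSpec }

type Cluster struct{ Spec ClusterSpec }

// UseLoadBalancerForAPI reports whether the API is fronted by a load
// balancer: true only when Spec.API.LoadBalancer is non-nil.
func (c *Cluster) UseLoadBalancerForAPI() bool {
	if c.Spec.API == nil {
		return false
	}
	return c.Spec.API.LoadBalancer != nil
}

func main() {
	withLB := &Cluster{Spec: ClusterSpec{API: &AccessSpec{LoadBalancer: &LoadBalancerAccessSpec{Type: "Public"}}}}
	withoutLB := &Cluster{Spec: ClusterSpec{API: &AccessSpec{}}} // empty struct, LB left nil
	fmt.Println(withLB.UseLoadBalancerForAPI())    // prints true
	fmt.Println(withoutLB.UseLoadBalancerForAPI()) // prints false
}
```

Note that if create cluster always sets API to an empty struct, the method correctly returns false until a load balancer is explicitly configured.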
/hold I haven't tested update after update.
For me it looks okay. However, I must test this in the following scenarios:
I cannot provision a new cluster with this PR using LBaaS.
% ~/go/bin/kops update cluster --name testing.k8s.local --yes
I0311 13:40:53.916175 82736 apply_cluster.go:559] Gossip DNS: skipping DNS validation
I0311 13:40:54.247802 82736 executor.go:103] Tasks: 0 done / 111 total; 45 can run
I0311 13:40:56.449430 82736 executor.go:103] Tasks: 45 done / 111 total; 43 can run
I0311 13:40:57.796527 82736 executor.go:103] Tasks: 88 done / 111 total; 8 can run
I0311 13:40:58.326076 82736 executor.go:103] Tasks: 96 done / 111 total; 6 can run
I0311 13:41:01.542006 82736 executor.go:103] Tasks: 102 done / 111 total; 1 can run
I0311 13:41:01.823334 82736 executor.go:103] Tasks: 103 done / 111 total; 4 can run
W0311 13:43:03.000680 82736 executor.go:130] error running task "FloatingIP/fip-bastions-1-testing-k8s-local" (7m58s remaining to succeed): Failed to associate floating IP to instance fip-bastions-1-testing-k8s-local
I0311 13:43:03.000748 82736 executor.go:103] Tasks: 106 done / 111 total; 5 can run
The command to create the cluster was:
~/go/bin/kops create cluster \
--cloud openstack \
--name testing.k8s.local \
--state ${KOPS_STATE_STORE} \
--zones zone-1 \
--network-cidr 10.1.0.0/16 \
--image debian-9-openstack-amd64 \
--master-count=3 \
--node-count=2 \
--node-size m1.medium \
--master-size m1.small \
--etcd-storage-type nova \
--api-loadbalancer-type public \
--topology private \
--bastion \
--networking weave \
--os-kubelet-ignore-az=true \
--os-ext-net int-net \
--os-ext-subnet int-v4-pub-subnet \
--os-lb-floating-subnet int-v4-subnet --os-dns-servers 8.8.8.8,8.8.4.4
Trying to create a cluster without an LB:
% ~/go/bin/kops create cluster \
--cloud openstack \
--name testing.k8s.local \
--state ${KOPS_STATE_STORE} \
--zones zone-1 \
--network-cidr 10.1.0.0/16 \
--image debian-9-openstack-amd64 \
--master-count=3 \
--node-count=2 \
--node-size m1.medium \
--master-size m1.small \
--etcd-storage-type nova \
--topology private \
--bastion \
--networking weave \
--os-kubelet-ignore-az=true \
--os-ext-net int-net \
--os-ext-subnet int-v4-pub-subnet \
--os-lb-floating-subnet int-v4-subnet --os-dns-servers 8.8.8.8,8.8.4.4
W0311 13:44:51.196328 82777 create_cluster.go:720] Running with masters in the same AZs; redundancy will be reduced
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x358d407]
goroutine 1 [running]:
main.RunCreateCluster(0xc420c37be0, 0x46e9460, 0xc4200ea008, 0xc420ed9800, 0x0, 0x0)
/Users/x/go/src/k8s.io/kops/cmd/kops/create_cluster.go:949 +0x6bb7
main.NewCmdCreateCluster.func1(0xc420453900, 0xc420c5f200, 0x0, 0x22)
/Users/x/go/src/k8s.io/kops/cmd/kops/create_cluster.go:276 +0x16a
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).execute(0xc420453900, 0xc420c5eb40, 0x22, 0x24, 0xc420453900, 0xc420c5eb40)
/Users/x/go/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:760 +0x2c1
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x64891e0, 0x397f100, 0x0, 0x0)
/Users/x/go/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:846 +0x30a
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).Execute(0x64891e0, 0x64bf700, 0x0)
/Users/x/go/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:794 +0x2b
main.Execute()
/Users/x/go/src/k8s.io/kops/cmd/kops/root.go:97 +0x87
main.main()
/Users/x/go/src/k8s.io/kops/cmd/kops/main.go:25 +0x20
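The panic above is a nil pointer dereference in create_cluster.go. A hedged sketch of the kind of guard that avoids this failure mode; the types and field names here are hypothetical stand-ins, not the actual kops code:

```go
package main

import "fmt"

// Hypothetical stand-ins for the spec shapes involved. The failure mode
// being illustrated: dereferencing Spec.API.LoadBalancer when no load
// balancer was configured.
type LoadBalancerSpec struct{ FloatingSubnet string }
type APISpec struct{ LoadBalancer *LoadBalancerSpec }
type ClusterSpec struct{ API *APISpec }

// floatingSubnet returns the LB floating subnet, guarding every pointer
// on the way down instead of panicking when the LB is absent.
func floatingSubnet(spec *ClusterSpec) (string, bool) {
	if spec == nil || spec.API == nil || spec.API.LoadBalancer == nil {
		return "", false
	}
	return spec.API.LoadBalancer.FloatingSubnet, true
}

func main() {
	noLB := &ClusterSpec{API: &APISpec{}} // no load balancer configured
	if _, ok := floatingSubnet(noLB); !ok {
		fmt.Println("no load balancer configured; skipping floating subnet")
	}
}
```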
OK, I took --os-lb-floating-subnet away because we are not using an LB here (that could also be checked in code).
Now the result is:
~/go/bin/kops update cluster --name testing.k8s.local --yes
I0311 13:47:43.621196 82845 apply_cluster.go:559] Gossip DNS: skipping DNS validation
I0311 13:47:43.937797 82845 executor.go:103] Tasks: 0 done / 105 total; 44 can run
I0311 13:47:44.388135 82845 vfs_castore.go:729] Issuing new certificate: "ca"
I0311 13:47:44.500190 82845 vfs_castore.go:729] Issuing new certificate: "apiserver-aggregator-ca"
I0311 13:47:50.993503 82845 executor.go:103] Tasks: 44 done / 105 total; 42 can run
I0311 13:47:51.443936 82845 vfs_castore.go:729] Issuing new certificate: "kubelet"
I0311 13:47:51.641355 82845 vfs_castore.go:729] Issuing new certificate: "apiserver-proxy-client"
I0311 13:47:51.755418 82845 vfs_castore.go:729] Issuing new certificate: "kubelet-api"
I0311 13:47:51.765450 82845 vfs_castore.go:729] Issuing new certificate: "kops"
I0311 13:47:51.876738 82845 vfs_castore.go:729] Issuing new certificate: "apiserver-aggregator"
I0311 13:47:52.017089 82845 vfs_castore.go:729] Issuing new certificate: "kubecfg"
I0311 13:47:52.147430 82845 vfs_castore.go:729] Issuing new certificate: "kube-proxy"
I0311 13:47:52.177558 82845 vfs_castore.go:729] Issuing new certificate: "kube-scheduler"
I0311 13:47:52.327923 82845 vfs_castore.go:729] Issuing new certificate: "kube-controller-manager"
I0311 13:47:53.624959 82845 executor.go:103] Tasks: 86 done / 105 total; 8 can run
I0311 13:48:00.437423 82845 executor.go:103] Tasks: 94 done / 105 total; 6 can run
I0311 13:48:03.969633 82845 executor.go:103] Tasks: 100 done / 105 total; 5 can run
I0311 13:48:09.608286 82845 context.go:231] hit maximum retries 4 with error GetFloatingIP: fetching floating IP failed: Resource not found
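The parenthetical above ("that could be checked in code") suggests validating flag combinations at create time instead of failing later. A hedged sketch, with a hypothetical helper name and the flag values passed in as plain strings:

```go
package main

import "fmt"

// validateOpenstackLBFlags is a hypothetical validation helper: it
// rejects load-balancer-only flags when no API load balancer type was
// requested, so the mismatch surfaces at create time rather than as a
// failed task or panic during update. Flag names come from the
// create-cluster commands shown above.
func validateOpenstackLBFlags(apiLoadBalancerType, lbFloatingSubnet string) error {
	if apiLoadBalancerType == "" && lbFloatingSubnet != "" {
		return fmt.Errorf("--os-lb-floating-subnet requires --api-loadbalancer-type to be set")
	}
	return nil
}

func main() {
	// LB flags without an LB type: should be rejected up front.
	if err := validateOpenstackLBFlags("", "int-v4-subnet"); err != nil {
		fmt.Println("error:", err)
	}
	// LB deployment with a floating subnet: flags are consistent.
	if err := validateOpenstackLBFlags("public", "int-v4-subnet"); err == nil {
		fmt.Println("ok")
	}
}
```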
I've fixed this, but have yet to push. For some reason I still cannot do an update after an update in my testing. Continuing to hold.
@zetaab Please take a look at your convenience. This was more complex than I thought.
/hold cancel
…ting subnet task to compare objects which cannot change in value instead of pointers GoVet updates
/test pull-kops-verify-gofmt
/retest |
/retest
@drekle: The following test failed.
Please help us cut down on flakes by linking to an open issue when you hit one in your PR. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Closing in favor of #6608, which is cleaned up and does not have this gofmt issue.
resolves:
#6593
#6592
The loadbalancer can be disabled by not specifying an API loadbalancer type.
Non-loadbalancer deployments will generate a certificate with the 3 master floating IP addresses.
Kubeconfig will choose arbitrarily between the masters.
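A minimal sketch of the "choose arbitrarily" behavior described above; the helper name and IP addresses are hypothetical placeholders:

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickAPIEndpoint picks one master floating IP at random to use as the
// kubeconfig server endpoint. Without a load balancer in front of the
// API, any single master is a valid (if non-redundant) target.
func pickAPIEndpoint(masterIPs []string) string {
	if len(masterIPs) == 0 {
		return ""
	}
	return masterIPs[rand.Intn(len(masterIPs))]
}

func main() {
	// Placeholder floating IPs for the 3 masters.
	masters := []string{"203.0.113.10", "203.0.113.11", "203.0.113.12"}
	fmt.Println("server: https://" + pickAPIEndpoint(masters))
}
```

As noted earlier in the thread, if the chosen master dies, the user has to edit their kubeconfig by hand, which is the main caveat of running without a load balancer.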
/sig openstack