Conversation
Can one of the admins verify this patch? |
Sorry this is lingering so long. Was waiting to see outcome of PR to flannel-cni (also @dghubble is on vacation for a bit) |
Responded in coreos/flannel-cni#5 (comment). Once that merges and a new version of flannel-cni is published, we may want to validate that we haven't broken anything by first bumping the version here, showing it still works, then (later PR) switching to the new CNI config to enable host port. |
Added #705 to show flannel-cni:v0.3.0 is backwards compatible. In this PR, can you update to the new release and use the new default config which has portmap, thanks. |
Done.. |
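For reference, the new default config mentioned above is a `.conflist` that chains the flannel plugin with `portmap`. A rough sketch of its shape (not copied verbatim from the flannel-cni release, so the network name and delegate settings are illustrative):

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
```

The `portmap` entry is what lets the runtime wire up `hostPort` mappings for pods.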
Can you rebase please? |
ok to test |
lgtm
Passes tests and doesn't regress, but I haven't verified this fixes hostPort as claimed. |
Wait, @klausenbusk, the CNI_CONF_NAME override was to show that the bump was backwards compatible. When rebasing and updating to the new CNI config, you previously noted the name had to be the conflist variant, so I think you'll now want to remove the name and use the default. Needs rebase and minor tweak. |
> Wait, @klausenbusk, the CNI_CONF_NAME override was to show that the bump was backwards compatible. When rebasing and updating to the new CNI config, you previously noted the name had to be the conflist variant, so I think you'll now want to remove the name and use the default.
I'm not sure what you mean? I have already removed `CNI_CONF_NAME`.
|
With this change, hostPort is finally working. Fix #662
Rebase done |
ok to test |
* Fixes hostPort pod functionality
* kubernetes-retired/bootkube#697
* kubernetes/kubernetes#23920
I'm not so confident in this change or these green tests yet. I'm seeing a lot of flakiness, and given that different users of bootkube are using different plugins (hyperkube vs host), I still can't verify this works properly across environments. Edit: Admittedly there are a bunch of issues with download speeds in the lab. |
If we wanted to, we could switch all the bootkube examples to use the bin dir provided by the CNI plugin itself (and not use the hyperkube fs copies). Should just be a matter of adding a |
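For context, pointing kubelet at the on-host plugin directory is normally just a kubelet flag change; the flags below are an assumption about what the elided flag refers to, not taken from this PR:

```sh
# Assumed kubelet flags: use the CNI binaries installed on the host by the
# CNI install container instead of copies from the hyperkube image.
kubelet \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/cni/bin
```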
I did get this up using the on-host CNI plugins (which I think we should try to use everywhere, but they weren't being used where I was testing), but when creating the reproduction case shown in kubernetes/kubernetes#23920, netstat shows no host port binding. @klausenbusk what was your verification? |
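The reproduction case from kubernetes/kubernetes#23920 amounts to a pod declaring a `hostPort` and then checking the node; a minimal sketch (pod name, image, and port numbers are arbitrary):

```yaml
# Minimal hostPort reproduction sketch; pod name, image, and ports are arbitrary.
apiVersion: v1
kind: Pod
metadata:
  name: hostport-test
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080
```

Note that, as discussed below, the portmap plugin implements the mapping with iptables rather than a listening socket, so `netstat` on the node won't show a bind even when it works.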
@aaronlevy The hack examples are bind mounting it in the default location, so it should be in use even without the flag (though I don't usually use the hack examples, so I'm not 100% sure). In other environments like Matchbox and Tectonic, the hyperkube copies are being used. I wasn't aware of this until recently; it wasn't my intent in Matchbox at least. |
I'm using
|
Same here, but it works. I'm not sure why we can't see the bind through. |
It seems like it is handled by a firewall rule:
|
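The rule itself was lost above; as a purely illustrative example (addresses and ports invented), the portmap plugin installs DNAT rules in a dedicated chain rather than binding a socket, roughly like:

```sh
# Hypothetical illustration of the kind of rule portmap installs (values invented).
iptables -t nat -L CNI-HOSTPORT-DNAT -n
# DNAT  tcp  --  0.0.0.0/0  0.0.0.0/0  tcp dpt:8080 to:10.2.0.5:80
```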
Makes sense, |
Oh, btw, above I described how after this change, projects using bootkube must use the host's CNI plugins or a really new hyperkube; if not, you'll get an error. Tectonic, Matchbox, and others need to change before updating, or use the fact we allowed this change to be backwards compatible (but not get hostPort). cc @squat |
Works alright for me.
cc @aaronlevy @squat for how Tectonic wants to proceed.
Does calico's |
Deploying the Calico policy-only addon means it takes over as the alphabetically first CNI config, so I'd expect that with calico, hostPort would continue to not function properly. I've noted the problems with this setup in #698; I'm not quite sure how we keep balancing the two. I personally use Calico networking instead, which I'm hoping to add soon. |
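For context, "alphabetically first" refers to the CNI runtime loading the lexicographically first config file in the conf dir; a hypothetical illustration (file names assumed, not from this repo):

```sh
# The runtime picks the first file by name in /etc/cni/net.d, so a calico
# .conf would sort ahead of a flannel .conflist (hypothetical file names).
ls /etc/cni/net.d
# 10-calico.conf  10-flannel.conflist
```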
I made these changes to get it to work with the experimental calico policy:

diff --git c/bootkube-system/manifests/kube-calico-cfg.yaml i/bootkube-system/manifests/kube-calico-cfg.yaml
index a709907..4a625ac 100644
--- c/bootkube-system/manifests/kube-calico-cfg.yaml
+++ i/bootkube-system/manifests/kube-calico-cfg.yaml
@@ -9,20 +9,30 @@ data:
{
"name": "k8s-pod-network",
"cniVersion": "0.3.0",
- "type": "calico",
- "log_level": "debug",
- "datastore_type": "kubernetes",
- "nodename": "__KUBERNETES_NODE_NAME__",
- "ipam": {
- "type": "host-local",
- "subnet": "usePodCidr"
- },
- "policy": {
- "type": "k8s",
- "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
- },
- "kubernetes": {
- "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
- "kubeconfig": "__KUBECONFIG_FILEPATH__"
- }
+ "plugins": [
+ {
+ "type": "calico",
+ "log_level": "debug",
+ "datastore_type": "kubernetes",
+ "nodename": "__KUBERNETES_NODE_NAME__",
+ "ipam": {
+ "type": "host-local",
+ "subnet": "usePodCidr"
+ },
+ "policy": {
+ "type": "k8s",
+ "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
+ },
+ "kubernetes": {
+ "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
+ "kubeconfig": "__KUBECONFIG_FILEPATH__"
+ }
+ },
+ {
+ "type": "portmap",
+ "capabilities": {
+ "portMappings": true
+ }
+ }
+ ]
}
diff --git c/bootkube-system/manifests/kube-calico.yaml i/bootkube-system/manifests/kube-calico.yaml
index 64224e5..43473b1 100644
--- c/bootkube-system/manifests/kube-calico.yaml
+++ i/bootkube-system/manifests/kube-calico.yaml
@@ -78,6 +78,8 @@ spec:
image: quay.io/calico/cni:v1.10.0
command: ["/install-cni.sh"]
env:
+ - name: CNI_CONF_NAME
+ value: 10-calico.conflist
- name: CNI_NETWORK_CONFIG
valueFrom:
configMapKeyRef:

However I had to manually delete the old |
Can you add that as a separate PR? On master, neither flannel nor canal supports hostPort, so I don't think this regresses. |
|
v0.3.0 is referring to the CNI spec version in the config, rather than the release version.
…On Sep 15, 2017 4:11 AM, "Kristian Klausen" ***@***.***> wrote:

> or a really new hyperkube

v0.6.0 hasn't got into k8s yet, and won't before at least v1.9.0. So people must use the host CNI binaries. kubernetes/kubernetes#49480

Portmap requires this fix (and maybe a few more): containernetworking/plugins#23, which is included in v0.6.0.

> You might consider just leaving the version at 0.3.0 for now (haven't tested) to be a little more lenient.

portmap requires v0.6.0; it won't work with older versions due to some bugs.
|
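For clarity on the version numbers above: the `cniVersion` field inside the network config is the CNI spec version, separate from the containernetworking/plugins release (v0.6.0) that ships the fixed portmap binary. An illustrative snippet using the values from the configs in this thread:

```json
{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.0",
  "plugins": []
}
```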
I'm aware of that and I changed my comment afterwards:
In other words: |
Sounds fine then, thanks for the clarifications. There are no loopholes; projects must switch to using on-host CNI plugins. I've changed all Matchbox clusters to do this. Poking Tectonic. |
Tectonic may opt to keep their current behavior (hostPort not working) and avoid the immediate need to migrate to using the on-host plugins (instead of the hyperkube ones), following my compatibility note in coreos/flannel-cni#5 (comment). |
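The backwards-compatible escape hatch mentioned here is the CNI_CONF_NAME override discussed earlier in the thread; a sketch of what keeping the legacy single-plugin config might look like on the install container (the value shown is an assumption, not copied from Tectonic):

```yaml
# Hypothetical override keeping the old single-plugin .conf name, which
# preserves current behavior (no hostPort) without migrating to on-host plugins.
env:
- name: CNI_CONF_NAME
  value: 10-flannel.conf
```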
@dghubble can you open an issue in the tectonic jira to track this? |
Yep, mentioned you on the issue. |
With this change, hostPort is finally working.
Fix #662
Requires: coreos/flannel-cni#5 (due to the file extension change).
cc @aaronlevy
I have tested this on my bootkube cluster.