
Enhancement Request: Avoid using "calico_iptables_backend=Auto" on CentOS 8 #9005

Closed
yankay opened this issue Jun 20, 2022 · 8 comments · Fixed by #10417

@yankay (Member) commented Jun 20, 2022

To avoid using "calico_iptables_backend=Auto" on CentOS 8, Oracle Linux 8, Rocky Linux 8, and RHEL 8,
should we put more effort into making things better?

Option 1:
Change the "calico_iptables_backend" default value to "NFT" on CentOS/RHEL/... 8,
and keep "calico_iptables_backend: Auto" on other operating systems.
I recommend this method. It avoids misconfiguration and makes the network more stable.
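A rough sketch of what Option 1 might look like as an OS-dependent default (this is illustrative only, not the actual kubespray defaults file; only calico_iptables_backend is a real kubespray variable, and the facts used are standard Ansible facts):

# Illustrative sketch for Option 1: pick the default per OS family.
# ansible_os_family / ansible_distribution_major_version are built-in Ansible facts.
calico_iptables_backend: >-
  {{ 'NFT' if (ansible_os_family == 'RedHat' and ansible_distribution_major_version | int >= 8)
     else 'Auto' }}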

Option 2:
Add a pre-install check: if "Auto" is used on CentOS 8, stop the install process.
I think this has a problem: there may be special cases where a user wants to specify "Auto" on CentOS 8.
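A minimal sketch of what the Option 2 pre-install check could look like, assuming an Ansible assert task (the task name and message are made up for illustration):

# Hypothetical pre-install check for Option 2.
- name: Stop when calico_iptables_backend is Auto on an EL8 host
  assert:
    that:
      - not (calico_iptables_backend == 'Auto'
             and ansible_os_family == 'RedHat'
             and ansible_distribution_major_version | int >= 8)
    fail_msg: "Set calico_iptables_backend to NFT on CentOS/RHEL 8; Auto detection is unreliable there."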

Option 3:
Only use documentation to advise users, and wait for Calico to fix the auto-detection bug.
That may take a very long time (the issue has already been open for 2 years).

Risk:
If Calico previously used auto mode (auto-detected to legacy), changing the config to "NFT" would break the network.
The node would need a reboot to make the network work again.

What do we think? Which option should we choose?


@yankay yankay added the kind/feature Categorizes issue or PR as related to a new feature. label Jun 20, 2022
@cristicalin (Contributor)

I would go with option 3, workarounds like 1 and 2 tend to stick around for too long past their due time.

@yankay (Member, Author) commented Jun 22, 2022

I would go with option 3, workarounds like 1 and 2 tend to stick around for too long past their due time.

Thank you for the suggestion.

@cyclinder (Contributor)

Is there a way for kubespray to know which iptables build the node has? Like the one shown below:

[root@node1 ~]# cat /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
[root@node1 ~]# iptables --version
iptables v1.8.4 (nf_tables)

If iptables on the node uses the nf_tables backend, we should set calico_iptables_backend to NFT; the two need to be consistent. We might be able to do something about it in kubespray :)
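Something like the following could probe that (a sketch only; the task names and the iptables_version register are made up for illustration):

- name: Check which backend the node's iptables binary reports   # hypothetical task
  command: iptables --version
  register: iptables_version
  changed_when: false

- name: Prefer the NFT backend when iptables reports nf_tables   # hypothetical task
  set_fact:
    calico_iptables_backend: NFT
  when: "'nf_tables' in iptables_version.stdout"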

@cristicalin (Contributor)

@cyclinder kubespray trying to do the right thing usually ends up backfiring. If you know your environment is affected by a particular issue, you can override the specific variable, as guided by the documentation, in your Ansible inventory variables or group variables.

I suggest taking the documentation approach and letting deployers or tools built on top of kubespray do the right thing.
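For reference, such a deployer-side override is a single group variable; the file path below is only an example of where it might live in an inventory:

# Example inventory override (path is illustrative):
# inventory/mycluster/group_vars/k8s_cluster/k8s-net-calico.yml
calico_iptables_backend: "NFT"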

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 20, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 20, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Nov 19, 2022
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
