
Re-introduce the check if the number of VFs is 0 #472

Merged

Conversation

SchSeba
Collaborator

@SchSeba SchSeba commented Jul 12, 2023

Now that the kubelet issue is fixed all the way back to k8s 1.25, we can add back the check for whether the number of existing VFs is 0. If it is, there is no need to drain the node, because no workloads can be running with VFs from that NIC.

@github-actions

Thanks for your PR,
To run vendors CIs use one of:

  • /test-all: To run all tests for all vendors.
  • /test-e2e-all: To run all E2E tests for all vendors.
  • /test-e2e-nvidia-all: To run all E2E tests for NVIDIA vendor.

To skip the vendors CIs use one of:

  • /skip-all: To skip all tests for all vendors.
  • /skip-e2e-all: To skip all E2E tests for all vendors.
  • /skip-e2e-nvidia-all: To skip all E2E tests for NVIDIA vendor.

Best regards.

@SchSeba SchSeba force-pushed the remove_drain_on_none_vfs branch from aa401d8 to 7af9cf7 on July 12, 2023 14:14
@coveralls

coveralls commented Jul 12, 2023

Pull Request Test Coverage Report for Build 5541150289

  • 3 of 5 (60.0%) changed or added relevant lines in 1 file are covered.
  • 5 unchanged lines in 2 files lost coverage.
  • Overall coverage decreased (-0.05%) to 24.473%

Changes Missing Coverage:

  • pkg/plugins/generic/generic_plugin.go: 3 of 5 changed/added lines covered (60.0%)

Files with Coverage Reduction:

  • controllers/sriovnetwork_controller.go: 2 new missed lines (61.9%)
  • api/v1/helper.go: 3 new missed lines (41.67%)

Totals:

  • Change from base Build 5530349517: -0.05%
  • Covered Lines: 2079
  • Relevant Lines: 8495

💛 - Coveralls


@SchSeba
Collaborator Author

SchSeba commented Jul 12, 2023

cc @adrianchiris @e0ne @zeeke @Eoghan1232 please take a look :)

@SchSeba SchSeba force-pushed the remove_drain_on_none_vfs branch from 7af9cf7 to b15ceab on July 12, 2023 15:29

@SchSeba
Collaborator Author

SchSeba commented Jul 12, 2023

Thanks @zeeke, can you give another round?

Member

@zeeke zeeke left a comment


LGTM

Collaborator

@Eoghan1232 Eoghan1232 left a comment


lgtm!

// once https://github.com/kubernetes/kubernetes/issues/109595 is fixed
if ifaceStatus.NumVfs == 0 {
	glog.V(2).Infof("generic-plugin needDrainNode(): no need drain, for PCI address %s current NumVfs is 0", iface.PciAddress)
	break
}
Collaborator


nit: return here like we do below? WDYT?

Collaborator Author


No, we can't, because we need to continue and check all the other statuses against the desired state.

In the NeedUpdate case we return early, because finding even one interface that needs an update means we must drain. That is not the case here: finding one interface that doesn't need a drain still means we have to check all of the others.

Collaborator


Ah, I didn't see that nested loop (it was not showing in the browser) :\

Yes, you are right!
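
To make the break-versus-return distinction above concrete, here is a minimal, self-contained Go sketch of that nested loop. The type and helper names (Interface, InterfaceExt, needToUpdate) and the PCI addresses are assumptions for illustration, modeled on the snippet quoted earlier in this review; this is not the operator's verbatim source.

package main

import "fmt"

// Hypothetical types carrying only the fields the quoted snippet uses.
type Interface struct {
	PciAddress string
	NumVfs     int
}

type InterfaceExt struct {
	PciAddress string
	NumVfs     int
}

// needToUpdate stands in for the real desired-vs-status comparison;
// here it simply compares VF counts.
func needToUpdate(iface *Interface, status *InterfaceExt) bool {
	return iface.NumVfs != status.NumVfs
}

func needDrainNode(desired []Interface, current []InterfaceExt) bool {
	for _, iface := range desired {
		for _, status := range current {
			if iface.PciAddress != status.PciAddress {
				continue
			}
			if status.NumVfs == 0 {
				// No VFs exist on this NIC, so no workload can hold one:
				// skip the drain for this interface, but keep checking the
				// remaining interfaces (break, not return).
				break
			}
			if needToUpdate(&iface, &status) {
				// One interface that needs an update is enough to require
				// a drain, so an early return is correct here.
				return true
			}
		}
	}
	return false
}

func main() {
	desired := []Interface{
		{PciAddress: "0000:3b:00.0", NumVfs: 8},
		{PciAddress: "0000:3b:00.1", NumVfs: 4},
	}
	current := []InterfaceExt{
		{PciAddress: "0000:3b:00.0", NumVfs: 0}, // skipped: no VFs yet
		{PciAddress: "0000:3b:00.1", NumVfs: 2}, // differs from desired 4
	}
	fmt.Println(needDrainNode(desired, current)) // prints: true
}

Replacing the break with a return would report "no drain needed" after the first zero-VF NIC and never inspect 0000:3b:00.1, which is exactly what the early-return suggestion would have introduced.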

Collaborator

@adrianchiris adrianchiris left a comment


Small nits; I see it already got 2 LGTMs.

Up to you if you want to address them before merging this one :)

Now that the kubelet issue is fixed all the way back to k8s 1.25,
we can re-introduce the check for whether the number of existing VFs is 0;
there is no need to drain the node because no workloads are
running with VFs from that NIC.

Signed-off-by: Sebastian Sch <[email protected]>
@SchSeba SchSeba force-pushed the remove_drain_on_none_vfs branch from b15ceab to 6b946e9 on July 13, 2023 09:00

@SchSeba
Collaborator Author

SchSeba commented Jul 13, 2023

Thanks for the quick review, folks! Merging.

@SchSeba SchSeba merged commit 6a6f8d8 into k8snetworkplumbingwg:master Jul 13, 2023