
Adding a security group to a KubeVirt VM in the default VPC and default subnet has no effect #3581

Closed
geniusxiong opened this issue Dec 27, 2023 · 6 comments · Fixed by #3700
Assignees: zbb88888
Labels: bug (Something isn't working)

Comments

@geniusxiong

Bug Report

In the default VPC and default subnet, adding a security group to a KubeVirt VM has no effect.

Expected Behavior

The security group added to the KubeVirt VM takes effect.

Steps to Reproduce the Problem

Two scenarios (a hedged command sketch for both scenarios follows this list):

  1. Adding the security group directly to the annotations of the virt-launcher-i-lra6sgqm-hsv5x Pod works. The security group added here denies ping.
    (screenshot)
    Verification: the security group (deny ping) takes effect.
    (screenshot)
    kube-ovn-controller log:
    (screenshot)
  2. After the security group configuration above is removed, ping succeeds again, confirming the configuration was removed.
    (screenshot)
    Ping verification:
    (screenshot)
    kube-ovn-controller log:
    (screenshot)
  3. Configuring the launcher Pod directly, as above, works without problems. However, when the security group is added via an annotation on the KubeVirt VirtualMachine (steps 4–7) and the VM is restarted after editing, the virt-launcher Pod inherits the security group configuration, but the security group does not take effect.
  4. Configure the KubeVirt VirtualMachine:
    (screenshot)
  5. After configuring, restart the VM and check the launcher Pod: the security group configuration has been synced.
    (screenshot)
  6. Verify the security group (deny ping): ping still succeeds, so the security group did not take effect.
    (screenshot)
  7. Check the kube-ovn-controller log; there seems to be no action that adds the security group:
    (screenshot)
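
For reference, a minimal command sketch of the two scenarios above, assuming Kube-OVN's `ovn.kubernetes.io/port_security` and `ovn.kubernetes.io/security_groups` Pod annotations; the SecurityGroup object, its rule fields, and all names/namespaces are illustrative, not taken from the report:

```bash
# Hypothetical deny-ping SecurityGroup (field names follow the Kube-OVN
# SecurityGroup CRD as of v1.12; treat this as a sketch, not the
# reporter's exact object).
cat <<'EOF' | kubectl apply -f -
apiVersion: kubeovn.io/v1
kind: SecurityGroup
metadata:
  name: sg-deny-ping
spec:
  ingressRules:
    - ipVersion: ipv4
      protocol: icmp
      priority: 1
      remoteType: address
      remoteAddress: 0.0.0.0/0
      policy: drop
EOF

# Scenario A: annotate the running virt-launcher Pod directly (works).
kubectl annotate pod virt-launcher-i-lra6sgqm-hsv5x \
  ovn.kubernetes.io/port_security='true' \
  ovn.kubernetes.io/security_groups='sg-deny-ping'

# Scenario B: put the same annotations on the VirtualMachine's pod
# template and restart the VM; the launcher Pod inherits them, but per
# this report the security group does not take effect.
kubectl patch vm i-lra6sgqm --type=merge -p \
  '{"spec":{"template":{"metadata":{"annotations":{"ovn.kubernetes.io/port_security":"true","ovn.kubernetes.io/security_groups":"sg-deny-ping"}}}}}'
```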

Additional Info

  • Kubernetes version:

    Output of kubectl version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}


  • kube-ovn version:

```bash
1.12.4
```

  • operating-system/kernel version:

    Output of awk -F '=' '/PRETTY_NAME/ { print $2 }' /etc/os-release:
    Output of uname -r:

```bash
"NFS Server 4.0 (G193)"
4.19.113-3.nfs.x86_64
```


@zbb88888 zbb88888 self-assigned this Jan 4, 2024
@zbb88888 zbb88888 added the bug Something isn't working label Jan 4, 2024
@zbb88888
Collaborator

zbb88888 commented Jan 4, 2024

I will look into this.

@geniusxiong
Author

> I will look into this.

Thanks!

@wfnuser
Contributor

wfnuser commented Jan 22, 2024

> I will look into this.

I came across the issue you opened, and it seems like I'm facing the same problem. I'm interested in trying to fix it, but I'm relatively new to kube-ovn. Could you give me some hints or clues to help me get started with investigating and resolving this issue?

I appreciate any assistance or guidance you can provide. Thanks!

@zbb88888
Collaborator

@wfnuser thanks, you can try to dive into this issue. I can give you some help if you need it.

@a180285

a180285 commented Jan 25, 2024

@wfnuser I suspect the misconfiguration here has the same cause as this issue: #3498

The VM's port was placed into multiple port groups (for example after the VM was restarted several times, so kube-ovn did not configure it correctly).
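
To check this suspicion, one can list the port groups that contain the VM's port. This assumes kubectl access to kube-system plus Kube-OVN's usual `app=ovn-central` pod label and `<pod-name>.<namespace>` LSP naming (both are assumptions about the environment):

```bash
# Pick an ovn-central pod and dump every port group with its member
# ports; the VM's port appearing in more than one sg port group would
# confirm the multiple-membership theory.
OVN_CENTRAL=$(kubectl -n kube-system get pod -l app=ovn-central \
  -o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$OVN_CENTRAL" -- \
  ovn-nbctl --format=csv --columns=name,ports list Port_Group
```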

@wfnuser
Contributor

wfnuser commented Feb 5, 2024

> @wfnuser I suspect the misconfiguration here has the same cause as this issue: #3498
>
> The VM's port was placed into multiple port groups (for example after the VM was restarted several times, so kube-ovn did not configure it correctly).

@a180285 @bobz965 @geniusxiong
Root cause: after the annotation is added to the VirtualMachine and the VM is restarted, a new Pod is created, which triggers creation of a new LSP. An LSP with the same name already exists, however, so the related logic is skipped, and the associated security group is never actually bound on the LSP. When LSPs are synced for the security group, the port is therefore not added to the corresponding port group.
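
A way to verify this explanation on a live cluster, under the same assumptions as the check above; the `associated_sg_*` external-ids key and the `ovn.sg.*` port-group naming are my reading of Kube-OVN's conventions, so treat the exact keys as assumptions:

```bash
# With the bug present, the reused LSP should lack the security-group
# binding in its external_ids even though the launcher Pod's annotation
# carries it.
OVN_CENTRAL=$(kubectl -n kube-system get pod -l app=ovn-central \
  -o jsonpath='{.items[0].metadata.name}')
LSP='i-lra6sgqm.default'   # <vm-name>.<namespace>; the namespace is assumed
kubectl -n kube-system exec "$OVN_CENTRAL" -- \
  ovn-nbctl --columns=name,external_ids list Logical_Switch_Port "$LSP"
```

If the `associated_sg_<name>` key is missing or `false`, the per-sg sync has nothing to add to the corresponding `ovn.sg.*` port group, which matches the empty controller log seen in step 7.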
