
Minikube fails on KVM2 on fresh minikube installation on Fedora 33 (with solution) #10794

Closed
kmajcher-rh opened this issue Mar 12, 2021 · 12 comments
Labels
  • co/kvm2-driver: KVM2 driver related issues
  • kind/documentation: Categorizes issue or PR as related to documentation.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • needs-solution-message: Issues where offering a solution for an error would be helpful
  • os/linux
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.
  • triage/duplicate: Indicates an issue is a duplicate of another open issue.

Comments

@kmajcher-rh

Steps to reproduce the issue:

  1. Follow the instructions from https://github.com/kubevirt/demo
  2. Follow the instructions from https://github.com/kubernetes/minikube/ to install minikube
  3. Run minikube with:
    $ minikube config set vm-driver kvm2
    $ minikube start --memory 4096

Full output of failed command:
Exiting due to PROVIDER_KVM2_ERROR: /usr/bin/virsh domcapabilities --virttype kvm failed


The resolution for the problem was doing two things:

  1. Adding systemd.unified_cgroup_hierarchy=0 to the kernel command line (via grubby; see the sketch after this list).
  2. Adding my users to appropriate libvirt and kvm groups
    sudo usermod -aG kvm $USER
    sudo usermod -aG libvirt $USER
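
For step 1, a minimal sketch using grubby (assuming a standard Fedora GRUB setup; the exact kernel entries may differ on your system):

    # Append the cgroups v1 fallback to the command line of every installed kernel
    sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
    # Reboot so the new kernel command line takes effect
    sudo reboot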

Perhaps it's worth adding this somewhere in the documentation, but I don't know where - there is no page for "common installation problems".

@afbjorklund
Collaborator

afbjorklund commented Mar 12, 2021

Hmm, I wonder why you need to change to cgroups v1 when running the "libvirt" driver? (or at all, for that matter)

i.e. systemd.unified_cgroup_hierarchy=0
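
For reference, a quick sketch for checking which cgroup hierarchy the host is actually running (Fedora 33 defaults to cgroups v2):

    # Prints "cgroup2fs" on a unified (v2) hierarchy, "tmpfs" on the legacy v1 layout
    stat -fc %T /sys/fs/cgroup/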

Adding your user to the libvirt group should be detailed in the Fedora documentation, but also mentioned here:

https://minikube.sigs.k8s.io/docs/drivers/kvm2/

It was supposed to be fixed here: #5617

We don't want it to ask for a password every time.


EDIT: Apparently Fedora prefers running it as root:

https://docs.fedoraproject.org/en-US/quick-docs/getting-started-with-virtualization/

In the Ubuntu documentation, they describe the group:

https://ubuntu.com/server/docs/virtualization-libvirt

If you run this command, does it recognize the authentication failure at all?

virt-host-validate

Or perhaps it only recognizes the hardware and kernel settings? Maybe:

virsh version --daemon
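
As a sketch of a permissions-oriented check (assuming the qemu:///system connection, which the kvm2 driver uses by default):

    # Fails with an authorization/permission error if the user may not talk to the system libvirtd
    virsh --connect qemu:///system list --all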

@afbjorklund added the co/kvm2-driver, os/linux, and kind/documentation labels Mar 12, 2021
@afbjorklund
Collaborator

afbjorklund commented Mar 12, 2021

Since the libvirt daemon will be doing the qemu-kvm calls, I'm not sure if your user needs to be in the "kvm" group?

sudo usermod -aG kvm $USER
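
For what it's worth, a quick sketch for checking how /dev/kvm is exposed on a given host (ownership and permissions vary between distributions):

    # Shows which group, if any, may open the KVM device directly;
    # with qemu:///system the qemu processes are started by libvirtd, not by your user
    ls -l /dev/kvm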

@kmajcher-rh
Author

Regarding virt-host-validate - I had several errors there before I added "systemd.unified_cgroup_hierarchy=0".

But minikube only started working for me after I added my user to the kvm and libvirt groups. I added it to both groups, so I'm not sure whether adding it to just the libvirt group would have been enough.

I opened this ticket to save others the couple of hours I lost trying to set up something that I was told would be "two minutes to set up" :)
Hopefully even just this thread will help.

@afbjorklund
Collaborator

afbjorklund commented Mar 12, 2021

As far as I know, the "libvirt" group alone should be enough - but it would be great to get a confirmation.
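
A sketch of what such a confirmation could look like, using only the group names already mentioned in this thread:

    # List current group memberships (a re-login is needed after usermod -aG)
    id -nG
    # Expected: "libvirt" present, "kvm" absent, and minikube start still succeeds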

cgroups v1 used to be needed in order to run Docker and Kubernetes locally on the host (not in a VM),
but the latest versions should also work with cgroups v2 - even if there are still lots of issues on Fedora...

https://fedoraproject.org/wiki/Changes/CGroupsV2

  • libvirt: The team is already working on this (DONE in libvirt 5.5.0)

Thanks for opening the ticket; it is supposed to work out-of-the-box with both drivers ("kvm2" and "docker").

Currently there is no automatic testing of minikube on Fedora, so feedback has to be provided by users.
There are some discussions in other issues, but hopefully this reminder to add the user to the libvirt group will be added:

https://github.com/kubernetes/minikube/pull/10712/files

@afbjorklund added the needs-solution-message and triage/duplicate labels Mar 12, 2021
@afbjorklund
Collaborator

trying to set up something that I was told will be "two minutes to set up" :)

That used to be the case with VirtualBox; it should not be worse with the new drivers.

@afbjorklund
Collaborator

afbjorklund commented Mar 12, 2021

The main problem with KubeVirt is otherwise the nested virtualization...

Since it will use Kubernetes to start new VMs (!) rather than Pods,
running Kubernetes itself in a VM (like libvirt) complicates things.

https://kubevirt.io/user-guide/operations/installation/

But otherwise you would have to use a separate (physical) computer,
and run the cluster remotely on it. So it might be worth the trouble.

https://docs.fedoraproject.org/en-US/quick-docs/using-nested-virtualization-in-kvm/
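
A sketch for checking whether nested virtualization is enabled on an Intel host (assuming the kvm_intel module is loaded; use kvm_amd on AMD):

    # "Y" (or "1" on older kernels) means nested virtualization is enabled
    cat /sys/module/kvm_intel/parameters/nested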

@kmajcher-rh
Author

I can confirm that minikube still works after I removed my user from the kvm group.
So adding my user to the libvirt group alone did the trick for me.

What would you recommend doing to address this issue and help people with a similar problem?

@afbjorklund
Collaborator

afbjorklund commented Mar 12, 2021

We need to make it easier for people to verify that the underlying solution is sound,
whether it is VirtualBox or libvirt or Docker. It needs to be working OK first, I think?

Ideally, their documentation would be awesome and we would just be able to link to it.
Otherwise we just have to verify that the "Getting Started" has enough information.

In this case, it seems that Fedora prefers to run everything through su (root).

https://docs.fedoraproject.org/en-US/quick-docs/getting-started-guide/#_root

So they prefer to have the user added to "sudoers", and to run docker through "sudo".
Other distributions add the admin to sudo by default, and run docker without sudo.

https://docs.fedoraproject.org/en-US/quick-docs/performing-administration-tasks-using-sudo/

Or "libvirt", but anyway.

@medyagh added the priority/backlog label Mar 16, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jun 14, 2021
@vmorris

vmorris commented Jun 15, 2021

Hello - I'm on Fedora 33 s390x, and I'm getting a similar failure trying to start... any help?

[fedora@minikube1 ~]$ minikube start
😄  minikube v1.21.0 on Fedora 33 (s390x)
✨  Using the kvm2 driver based on user configuration

🚫  Exiting due to PR_KVM_USER_PERMISSION: libvirt group membership check failed:
error getting current user's GIDs: user: GroupIds requires cgo
💡  Suggestion: Ensure that you are a member of the appropriate libvirt group (remember to relogin for group changes to take effect!)
📘  Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
🍿  Related issues:
    ▪ https://github.com/kubernetes/minikube/issues/5617
    ▪ https://github.com/kubernetes/minikube/issues/10070

Grasping around, I've tried adding my user fedora to the libvirt, kvm, and qemu groups, to no avail...
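
The failing step in that output is only a group-membership lookup; a manual sketch of the same check is below. This does not work around the cgo limitation in the binary itself, it just shows what minikube was trying to verify:

    # "libvirt" must appear here, and membership only takes effect in a fresh login session
    id -nG fedora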

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Jul 17, 2021
@spowelljr added the lifecycle/frozen label and removed the lifecycle/rotten label Aug 4, 2021
@medyagh
Member

medyagh commented Sep 15, 2021

@kmajcher-rh glad to see that helped.

I can confirm that when I removed user from the kvm group, minikube works.
So adding user to libvirt group did the trick for me.

What would you recommend to do to address this issue and help people with similar problem?

@vmorris does this help? #10794 (comment)

@medyagh medyagh closed this as completed Sep 15, 2021