wireless does not automatically reconnect on resume on R4-rc1 #3151
Comments
Also, sorry for the journal-rant-style report above; it started as just notes to myself to remember what I was doing while working on this offline in the few spare bits of free time I had over a couple of days.
The original pm-utils standard required two parameters, so plugging this script into the systemd equivalent follows it. It's now probably unneeded, as pm-utils isn't used in the fc25-based dom0 anymore.
This is the actual problem.
This is also a problem, probably #3142
It is there, but not installed on Fedora. Generally we do keep scripts for non-systemd VMs, to ease porting *-agent to such systems too.
Theoretically yes, but that would break compatibility - we try to maintain compatibility of dom0-VM interface, even between 3.x and 4.x. So it's possible to use VMs from 3.x on 4.x (including custom templates etc).
This is intentional. Those calls to qubesd are not meant to be called through qrexec (from outside of dom0), only by dom0, through a separate socket.
observed behavior: NetworkManager is not running in sys-net on resume, and blacklisted drivers (/rw/config/suspend-module-blacklist) need to be manually reloaded.

A reliable work-around is manually starting NetworkManager and reloading the blacklisted drivers, but that got annoying pretty quickly, so down the rabbit hole we go...
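For the record, the work-around amounts to something like this inside sys-net (a rough sketch; it assumes the blacklist file simply lists module names, one per line):

    # start NetworkManager again by hand (it was stopped by the suspend hook)
    sudo systemctl start NetworkManager

    # reload every module named in the blacklist file
    # (assumes one module name per line; '#' comment lines skipped)
    grep -v '^#' /rw/config/suspend-module-blacklist | while read -r mod; do
        [ -n "$mod" ] && sudo modprobe "$mod"
    done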
Sorry, the following source references are from an installed machine instead of the repos, but I don't have the source code downloaded on this machine and didn't have internet access while debugging this.
in sys-net journal:

NetworkManager is stopped in sys-net by /usr/lib/qubes/prepare-suspend:

instrumenting that script, it indeed gets called with $1=suspend, but not called on resume. Also, it gets invoked with uid=1000, which explains what shows up in the logs above.
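The instrumentation itself was nothing fancy; a line like the following near the top of the script is enough (a temporary debug line, not part of the shipped script):

    # temporary debug line added near the top of /usr/lib/qubes/prepare-suspend:
    logger -t prepare-suspend "called with arg='$1' uid=$(id -u)"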
running that script with arg "resume" manually starts NetworkManager, but does not bring up my interface (wlp0s0). reloading the modules afterwards makes it work again
So there appear to be two problems:
I wish I had an R3.2 box on hand to trace how this used to work.
side note: the service stuff appears outdated, and qubes-core-netvm doesn't appear to exist anymore. can it be removed and simplified to just systemctl?
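To illustrate what I mean by "just systemctl" — a sketch of the simplification, not the current script contents:

    # hypothetical simplification: drive the unit directly instead of going
    # through the old qubes-core-netvm service wrapper
    case "$1" in
        suspend) systemctl stop NetworkManager ;;
        resume)  systemctl start NetworkManager ;;
    esac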
systemd suspend hook in dom0 (sleep.target.wants -> qubes-suspend.service, Before=sleep.target)
side note: the above invokes 52qubes-pause-vms with two arguments, when it only takes one:
dom0's /usr/lib64/pm-utils/sleep.d/52qubes-pause-vms:

side note: there are no /etc/qubes-rpc/{,policy/}internal.*, I wonder if this may have hidden side-effects

in dom0's /usr/lib/python3.5/site-packages/qubes/api/internal.py:
In sys-net, all of the qubes.Suspend{Pre,Post}{,All} services exist in /etc/qubes-rpc:
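For reference, one way to list them from a shell in sys-net:

    ls -l /etc/qubes-rpc/qubes.Suspend*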
sys-net's /etc/qubes-rpc/qubes.SuspendPreAll (the one that's called on suspend):

but, there are no hooks there! so it does nothing.
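That is, the service just runs whatever hook scripts it finds in a directory, run-parts style — roughly the pattern below (a sketch, not the actual service contents; the suspend-pre.d path is inferred from the suspend-post.d counterpart mentioned further down). With no hooks installed, the loop body never executes:

    #!/bin/sh
    # run-parts style dispatcher: execute every hook in the directory;
    # with the directory empty, nothing happens at all
    for hook in /etc/qubes/suspend-pre.d/*.sh; do
        [ -x "$hook" ] && "$hook"
    done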
qubes.SuspendPre seems to do what's intended:
sys-net's /etc/qubes-rpc/qubes.SuspendPre (apparently not called?):

SuspendPost[All] is a similar story, the only difference being the existence of /etc/qubes/suspend-post.d/qvm-sync-clock.sh and calling /usr/lib/qubes/prepare-suspend with "resume".

But if qubes.SuspendPre (not SuspendPreAll) isn't called, how is NetworkManager getting unloaded in the first place? Maybe through somewhere deeper in that vm.suspend() and vm.resume() call in qubes/api/internal.py above? Let's see...

Instrumenting the rpc service scripts and suspending/resuming, I observe the following sequence:
so qubes.Suspend{Pre,Post}All are getting called as root, and qubes.Suspend{Pre,Post} (no All) are getting called as the user, but they are indeed all getting called.
From /usr/lib/python3.5/site-packages/qubes/vm/qubesvm.py:

Sure enough... vm.suspend() causes the qubes.SuspendPre service call, and vm.resume() causes the qubes.SuspendPost call (both run as the user, matching the sequence above).
Invoking qubes.Prepare{Suspend,Resume} as root makes wireless work again on resume:
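Roughly, the experiment was along these lines from a shell in sys-net (a sketch of the experiment rather than the exact invocation):

    # run the preparation script as root by hand, around a suspend/resume cycle
    sudo /usr/lib/qubes/prepare-suspend suspend
    # ... suspend and resume the machine ...
    sudo /usr/lib/qubes/prepare-suspend resume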
The contents of /usr/lib/qubes/prepare-suspend (invoked by qubes.Prepare{Suspend,Resume}) do appear to be intended to run as root (it expects modprobe to work, etc.). I would confirm whether they were invoked as root on a 3.2 machine, but I don't have one with me, and don't have the source on hand right now either (and github's search feature apparently only indexes master, which is now R4).

final side-note: having both qubes.SuspendPre and qubes.SuspendPreAll in the first place seems kind of weird to me. Why should a VM care if it is the only one being suspended or if other VMs are being suspended as well? Would a better solution be to simplify the hooks altogether and just invoke /usr/lib/qubes/prepare-suspend {suspend,resume} via /etc/qubes/suspend-{pre,post}.d/...?
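Something like this pair of tiny hook scripts (hypothetical file names):

    #!/bin/sh
    # /etc/qubes/suspend-pre.d/50-prepare-suspend.sh  (hypothetical file name)
    exec /usr/lib/qubes/prepare-suspend suspend

    #!/bin/sh
    # /etc/qubes/suspend-post.d/50-prepare-suspend.sh  (hypothetical file name)
    exec /usr/lib/qubes/prepare-suspend resume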
@marmarek (or whoever): Feel free to commit the preferred fix without me; it might be some time before I can get around to submitting a PR.