ayatana-indicators: init messaging indicator, module, test #243476
Conversation
Force-pushed 965ae03 to 915375d.
Ehh, prolly good enough for review now.
This pull request has been mentioned on NixOS Discourse. There might be relevant details there: https://discourse.nixos.org/t/prs-ready-for-review/3032/2651
Force-pushed 915375d to 879c21f.
Force-pushed 879c21f to bed9247.
Force-pushed bed9247 to 2d8ab09.
Result of nixpkgs-review: 2 packages blacklisted, 3 packages built.
nixos/tests/ayatana-indicators.nix (outdated)
# Let all services do their startups, wait for potential post-launch crashes & crash-restart cycles to be done
# Not sure if there's a better way of awaiting this
machine.sleep(10)
You shouldn't really need this; usually you only wait for services specific to your test. The test also won't fail if some random systemd unit unrelated to your test script dies, so I don't really see a benefit.
The comment is referring to the individual indicator services. wait_for_unit can't handle situations where a service launches fine but crashes some time post-launch; it just looks at whatever the state is at that moment. I've seen this happen with some of the indicators if wrapping or expected system services are missing, making the test give a false positive.
In addition, I can't figure out the best unit to await for the services being XDG-autostarted, so there is more waiting there. I'd be more than happy about a better solution for this!
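For illustration only, here is a minimal sketch of the pattern being described, assuming the NixOS test driver's Python API; the unit name and user are placeholders, not the exact names used in the PR:

```nix
testScript = ''
  # wait_for_unit only samples the unit's state once: a unit that is "active"
  # here can still crash a moment later and this call would not notice.
  machine.wait_for_unit("ayatana-indicator-messages.service", "alice")

  # Hence the extra settling time before anything else is checked.
  machine.sleep(10)
'';
```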
Thanks for the review. Will rethink stuff when I have time.
Force-pushed 2d8ab09 to 8fd521b.
Force-pushed 8fd521b to 622342b.
Force-pushed 622342b to 84605b1.
I think this should be better now?
- module path fits the guidelines
- module tests succeed on x86_64-linux
- options have appropriate types
- options have defaults
- options have examples
- options have descriptions
- no unneeded package is added to environment.systemPackages
- meta.maintainers is set
- module documentation is declared in meta.doc: small internal module, not really needed. I consider the ayatana-indicators.packages description sufficient.
Looks very good to me, just a couple of nits/questions.
# MATE relies on XDG autostart to bring up the indicators.
# Not sure *when* XDG autostart fires them up, and awaiting pgrep success seems to misbehave?
machine.sleep(10)
Flaky tests are very annoying on Hydra, and this one is very much affected by the current workload of the host. Is it possible to await the MATE user shell or a dummy XDG autostart target?
Maybe? I'll have to look. Perhaps xdg-desktop-autostart.target, but I'm not sure MATE uses systemd for XDG autostart. Not sure if there's anything for MATE itself.
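If MATE's session does go through systemd, something like the following could replace the sleep. This is only a sketch: whether MATE activates that target at all, and the user name, are assumptions.

```nix
testScript = ''
  # Only meaningful if the session manager actually activates this target;
  # otherwise this will time out and the sleep-based approach stays.
  machine.wait_for_unit("xdg-desktop-autostart.target", "alice")
'';
```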
machine.sleep(10)
I think wait_until_succeeds below is better.
We could instead wait_until_succeeds("pgrep [...]") below, and I like the idea of not relying on sleep()s at all, but in my testing not sleep()ing anywhere blows right past all starts-but-immediately-segfaults situations. Even checking coredumpctl will report no errors, unless we sleep() to let it catch up with handling the coredumps.
Edit: Here is an example where I intentionally break ayatana-indicator-messages so it can't find its required schema, and modify the VM test to not use any sleep()s:
- wait_until_succeeds("pgrep [...]") succeeds, despite systemd working on handling a coredump of the XDG-autostarted process
- succeed("pkill [...]") succeeds, presumably interrupting the coredump handling completely
- wait_for_unit("${service}") succeeds, either while a coredump of the systemd service gets handled or because the service got auto-restarted by systemd
- fail("coredumpctl --json=short | grep 'ayatana-indicator'") succeeds, presumably because there hasn't been any time to register any coredumps

I'd prefer to use sleep()s between some steps to make sure the test steps are less likely to lie to me, unless you have any ideas for how to handle this?
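As a sketch, the sequence with sleeps in place would look roughly like this per indicator; the process name is a placeholder, and the commands are the ones already discussed above:

```nix
testScript = ''
  # Give XDG autostart time to launch the indicators, and give any
  # starts-but-immediately-segfaults cases time to actually crash.
  machine.sleep(10)

  # Only now does a surviving process mean the indicator really came up.
  machine.wait_until_succeeds("pgrep -f ayatana-indicator-messages")

  # And only now has coredumpctl had a chance to register any coredumps.
  machine.fail("coredumpctl --json=short | grep 'ayatana-indicator'")
'';
```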
Great exploration of the problem space. Even forcing coredumpctl to have a look wouldn't work unless we can guarantee a deficit would have caused a coredump by then. rg '\.sleep\(.*\)' -o | cut -d: -f2 | sort | uniq -c reveals to me that 10 seconds is a pretty common wait time, so let's stick with it.
# Now check if all indicators were brought up successfully, and kill them for later
'' + (runCommandPerIndicatorService (service: let serviceExec = builtins.replaceStrings [ "." ] [ "-" ] service; in ''
machine.succeed("pgrep -f ${serviceExec}")
Is it possible to await this with a timeout, or retry it n times? If so, then the sleep(10) above is sufficient.
machine.succeed("pgrep -f ${serviceExec}") | |
machine.wait_until_succeeds("pgrep -f ${serviceExec}", timeout=100) # 900 default |
Description of changes
Working towards #99090.
Ayatana Indicators are a continuation of Canonical's Application Indicators. Lomiri makes use of them for its top bar system: without at least ayatana-indicator-session, you won't be able to exit the session as there will be no way to trigger the dialogue 😛.
This PR:
- inits ayatana-indicator-messages as a start.
- adds a module that has ayatana-indicators.target bring up a list of desired indicators (see the sketch after this list). I'll happily accept better solutions for this. Normally this would be handled via WantedBy, but we don't install (user) services on NixOS, so one of the sides of this interaction needs to get an explicit Wants (to the best of my knowledge).
- expects a passthru.ayatana-indicators = [ "thingy" ] attribute to be present in all requested indicator packages, listing the indicator services they provide. There's no individual package with multiple services that I'm aware of, but I see no reason to disallow this in practice.

Some future indicators will come with optional Lomiri-specific support. I'll enable this when possible, and try to keep an eye on closure size increases so future non-Lomiri uses won't be too affected. See #262118 for a first push for that + initial closure considerations there.
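A rough sketch of how the wiring described above could look; the option names, unit names, and the exact handling are assumptions based on this description, not necessarily the code in the PR:

```nix
{ config, lib, ... }:
let
  cfg = config.services.ayatana-indicators;
  # Service names each selected package declares via passthru.ayatana-indicators.
  indicatorServices = lib.concatMap (pkg: pkg.passthru.ayatana-indicators or [ ]) cfg.packages;
in
{
  options.services.ayatana-indicators = {
    enable = lib.mkEnableOption "Ayatana Indicators";
    packages = lib.mkOption {
      type = lib.types.listOf lib.types.package;
      default = [ ];
      description = "Indicator packages that ayatana-indicators.target should bring up.";
    };
  };

  config = lib.mkIf cfg.enable {
    environment.systemPackages = cfg.packages;
    # Since the packages' user units aren't "installed", their WantedBy doesn't
    # take effect; give the target an explicit Wants on each indicator instead.
    systemd.user.targets.ayatana-indicators = {
      description = "Ayatana Indicators";
      wants = map (s: "${s}.service") indicatorServices;
      partOf = [ "graphical-session.target" ];
    };
  };
}
```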
The test is using MATE because it's one of the DEs with theoretical support for the indicators, and because the target is supposed to depend on a graphical session, but that support is not really being used: mate.mate-indicator-applet would first have to be patched to use Ayatana indicators and find new-style ones, it would require (graphical?) panel configuration to load the indicator applet, and it doesn't seem to support/show some indicators (only 1/3 of the ones I tested showed up, but maybe I just picked some not-useful ones). Maybe it can be rewritten to use Lomiri later on, which has none of those issues, but that's somewhat secondary to me when the entire rendering part of the interplay is really DE-specific.
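For orientation, a minimal sketch of how such a VM test setup might look; the option paths (in particular services.ayatana-indicators) and user are assumptions, and the real test lives in nixos/tests/ayatana-indicators.nix:

```nix
import ./make-test-python.nix ({ pkgs, ... }: {
  name = "ayatana-indicators";

  nodes.machine = { ... }: {
    # MATE only provides the graphical session; its indicator applet is not exercised.
    services.xserver.enable = true;
    services.xserver.desktopManager.mate.enable = true;
    services.xserver.displayManager.autoLogin = {
      enable = true;
      user = "alice";
    };

    # Hypothetical module usage, mirroring the description above.
    services.ayatana-indicators = {
      enable = true;
      packages = [ pkgs.ayatana-indicator-messages ];
    };

    users.users.alice = {
      isNormalUser = true;
      password = "foobar";
    };
  };

  testScript = ''
    machine.wait_for_x()
    # ... per-indicator checks as discussed in the review above ...
  '';
})
```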
Things done
- sandbox = true set in nix.conf? (See Nix manual)
- Tested compilation of all packages that depend on this change using nix-shell -p nixpkgs-review --run "nixpkgs-review rev HEAD". Note: all changes have to be committed, also see nixpkgs-review usage.
- Tested basic functionality of all binary files (usually in ./result/bin/)