
High cpu usage when show_age_threshold = -1. #1163

Closed
Tracked by #1168
fwsmit opened this issue Apr 12, 2023 · 4 comments

fwsmit (Member) commented Apr 12, 2023

Described by @bakkeby:

As per the comment in the dunst config file, show_age_threshold can be set to -1 to disable the display of the age of a message.

In queues_get_next_datachange, if there are no messages that are timing out (i.e. they are permanent until the user interacts with them), then the function will return -1.

return wakeup_time != G_MAXINT64 ? wakeup_time : -1;
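
For context, here is a minimal, self-contained sketch of the pattern behind that return statement (the fake_notification struct and sketch_next_datachange function are illustrative stand-ins, not the actual dunst data structures): the earliest per-notification deadline is folded into an accumulator that starts at G_MAXINT64, so if no notification contributes a deadline the sentinel survives and -1 comes back.

#include <glib.h>

/* Illustrative stand-in for a notification that either has an absolute
 * wakeup deadline (monotonic time in microseconds) or never times out (0). */
struct fake_notification {
        gint64 timeout_at;
};

/* Sketch of the accumulation pattern: start at G_MAXINT64, take the
 * minimum of every real deadline, and return -1 ("no wakeup needed")
 * when the sentinel survives untouched. */
static gint64 sketch_next_datachange(const struct fake_notification *n, gsize len)
{
        gint64 wakeup_time = G_MAXINT64;
        for (gsize i = 0; i < len; i++) {
                if (n[i].timeout_at > 0)
                        wakeup_time = MIN(wakeup_time, n[i].timeout_at);
        }
        return wakeup_time != G_MAXINT64 ? wakeup_time : -1;
}

int main(void)
{
        /* Two notifications that never time out, e.g. critical urgency
         * with show_age_threshold = -1: no wakeup is ever needed. */
        struct fake_notification sticky[2] = { { 0 }, { 0 } };
        g_assert_cmpint(sketch_next_datachange(sticky, 2), ==, -1);
        return 0;
}

Built against GLib (e.g. gcc sketch.c $(pkg-config --cflags --libs glib-2.0)), the assertion passes; this mirrors the show_age_threshold = -1 plus critical-urgency case, where nothing is going to expire on its own.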

Now, in the run function, we do not take into account that the return value can be -1, indicating that there is no need to do any further updates. With timeout_at at -1 the computed sleep is negative, gets clamped up to the 1000 µs minimum, and run is rescheduled roughly every millisecond, which is what drives the CPU usage up.

dunst/src/dunst.c

Lines 95 to 115 in 464076d

        if (active) {
                gint64 timeout_at = queues_get_next_datachange(now);

                // Previous computations may have taken time, update `now`
                // This might mean that `timeout_at` is now before `now`, so
                // we have to make sure that `sleep` is still positive.
                now = time_monotonic_now();
                gint64 sleep = timeout_at - now;
                sleep = MAX(sleep, 1000); // Sleep at least 1ms

                LOG_D("Sleeping for %li ms", sleep/1000);

                if (sleep >= 0) {
                        if (reason == 0 || next_timeout < now || timeout_at < next_timeout) {
                                if (next_timeout != 0) {
                                        g_source_remove(next_timeout_id);
                                }
                                next_timeout_id = g_timeout_add(sleep/1000, run, NULL);
                                next_timeout = timeout_at;
                        }
                }
        }

The solution can be as simple as bailing out early when that is the case:

         if (active) {
                 gint64 timeout_at = queues_get_next_datachange(now);
+                if (timeout_at == -1)
+                        return G_SOURCE_REMOVE;
                 // Previous computations may have taken time, update `now`
                 // This might mean that `timeout_at` is now before `now`, so
                 // we have to make sure that `sleep` is still positive.
                 now = time_monotonic_now();
                 gint64 sleep = timeout_at - now;
                 sleep = MAX(sleep, 1000); // Sleep at least 1ms
 
                 LOG_D("Sleeping for %li ms", sleep/1000);
 
                 if (sleep >= 0) {
                         if (reason == 0 || next_timeout < now || timeout_at < next_timeout) {
                                 if (next_timeout != 0) {
                                         g_source_remove(next_timeout_id);
                                 }
                                 next_timeout_id = g_timeout_add(sleep/1000, run, NULL);
                                 next_timeout = timeout_at;
                         }
                 }
         }

(There are, of course, other ways to refactor the code to the same effect.)
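
For instance, instead of returning early one could guard the whole scheduling branch on the -1 sentinel. This is only a sketch against the snippet quoted above (untested, reusing the existing run, next_timeout and next_timeout_id variables), not a proposed patch:

        if (active) {
                gint64 timeout_at = queues_get_next_datachange(now);

                // -1 means nothing will change on its own, so there is
                // no point in (re)arming the wakeup timer at all.
                if (timeout_at != -1) {
                        now = time_monotonic_now();
                        gint64 sleep = MAX(timeout_at - now, 1000); // sleep at least 1ms

                        if (reason == 0 || next_timeout < now || timeout_at < next_timeout) {
                                if (next_timeout != 0) {
                                        g_source_remove(next_timeout_id);
                                }
                                next_timeout_id = g_timeout_add(sleep/1000, run, NULL);
                                next_timeout = timeout_at;
                        }
                }
        }

Either way the effect is the same: when nothing is left to time out, dunst stops rescheduling itself every millisecond.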

An example test scenario is to:

  • set show_age_threshold to -1 in the dunstrc configuration file and
  • run top -p $(pidof dunst) to monitor CPU usage and
  • run notify-send -u critical hello

Expected behaviour without the change above:

  • CPU usage increases drastically while the notification is displayed
  • the notification may flicker

Expected behaviour with the change above:

  • CPU usage remains at 0.0% and
  • there are no unwanted side effects (i.e. new notifications come and go as normal, one can interact with the critical notification as normal)

fwsmit (Member, Author) commented Apr 12, 2023

@bakkeby Your fix seems to work. Could you submit it or something similar as a PR?

bakkeby (Contributor) commented Apr 12, 2023

Sure, I'll raise one.

fwsmit (Member, Author) commented Apr 16, 2023

Fixed with #1164

myme commented Apr 20, 2023

I've been consistently struggling with this issue for a while (and, sorry to say, without putting effort into solving it). I built from master after #1168 yesterday and have seen no trace of CPU spin-ups since. Great work! 👍
