
systemd: add weekly and monthly scrub timers #12193

Merged: 2 commits, Dec 16, 2021

Conversation

@gyakovlev (Contributor, author) commented Jun 4, 2021

Motivation and Context

Provide 2 basic timers that leverage systemd scheduling tech.

Description

Provide three extra unit files for use with systemd: two timer templates (weekly and monthly) and one scrub service template.

Timers can be enabled as follows:

systemctl enable zfs-scrub-weekly@${poolname}.timer --now
systemctl enable zfs-scrub-monthly@${poolname}.timer --now

Each timer will pull in zfs-scrub@${poolname}.service, which is not
schedule-specific; the zfs-scrub service receives the pool name as its instance argument from the timer unit.

The configuration provided is generic and simple.

Users can tweak parameters with systemctl edit <unitname>.
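
For example, the schedule of an enabled timer can be overridden per pool with a drop-in (a minimal sketch; the pool placeholder and the OnCalendar value are illustrations, not defaults shipped by this change):

# systemctl edit zfs-scrub-weekly@<poolname>.timer
[Timer]
OnCalendar=
OnCalendar=Sat *-*-* 02:00:00

The empty OnCalendar= clears the template's value before the new one is set, which is how list-type settings are overridden in systemd drop-ins.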

How Has This Been Tested?

 # systemctl enable zfs-scrub-weekly@zroot.timer --now
Created symlink /etc/systemd/system/timers.target.wants/[email protected] → /usr/lib/systemd/system/[email protected].

 # systemctl status zfs-scrub-weekly@zroot.timer
● zfs-scrub-weekly@zroot.timer - Weekly zpool scrub timer for zroot
     Loaded: loaded (/usr/lib/systemd/system/[email protected]; enabled; vendor preset: disabled)
     Active: active (waiting) since Thu 2021-06-03 18:46:18 PDT; 11s ago
    Trigger: Mon 2021-06-07 00:00:00 PDT; 3 days left
   Triggers: ● [email protected]
       Docs: man:zpool-scrub(8)

Jun 03 18:46:18 cerberus systemd[1]: Started Weekly zpool scrub timer for zroot.

# systemctl status [email protected]
● [email protected] - zpool scrub on zroot
     Loaded: loaded (/usr/lib/systemd/system/[email protected]; static)
     Active: inactive (dead)
TriggeredBy: ● zfs-scrub-weekly@zroot.timer
       Docs: man:zpool-scrub(8)

In the example above, a scrub of the zroot pool will be triggered weekly.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
  • Documentation (a change to man pages or other documentation)

Checklist:

gentoo-bot pushed a commit to gentoo/gentoo that referenced this pull request Jun 4, 2021
to use with systemd
Pr: openzfs/zfs#12193
Signed-off-by: Georgy Yakovlev <[email protected]>
@behlendorf added the Component: Systemd (Systemd integration) and Status: Code Review Needed (Ready for review and testing) labels on Jun 4, 2021.
@behlendorf (Contributor) left a review comment

Neat! My only concern is that we should document how to enable these timers. One option would be to add a few lines to the zpool-scrub.8 man page. Maybe something like, "On machines using systemd, scheduled scrub can be enabled by...".

I believe you also need to add the new units to the 50-zfs.preset.in to set the default behavior.

https://www.freedesktop.org/software/systemd/man/systemd.preset.html

@gyakovlev (Contributor, author)

Sure, will update the man page. Thanks for the suggestion.

As for the preset: we need to know pool names, hence we can't enable these via systemctl preset.

It's not "scrub everything"; these timers are opt-in on a per-pool basis.

So to enable a weekly timer for rpool, a user will run:

systemctl enable zfs-scrub-weekly@rpool.timer --now

@behlendorf (Contributor)

I see. Well, I think we'll want to make sure this functionality is disabled by default, which, since it requires a pool name, seems like it will be the case.

@gyakovlev (Contributor, author)

Yes, it's disabled by default and completely opt-in.

I added a short description of how to use the timers:
[screenshot of the added documentation]

@behlendorf (Contributor) left a review comment

Looks good. Thanks for adding this to the docs.

@behlendorf requested a review from rlaager, June 11, 2021 15:30
@rlaager (Member) left a review comment

On Debian, we have this script:

#!/bin/sh -eu

# Scrub all healthy pools that are not already scrubbing.
zpool list -H -o health,name 2>&1 | \
	awk '$1 ~ /^ONLINE/ { print $2; }' | \
while read pool
do
	if ! zpool status "$pool" | grep -q "scrub in progress"
	then
		# Ignore errors (i.e. HDD pools),
	# and continue with scrubbing other pools.
		zpool scrub "$pool" || true
	fi
done

(I'm not sure what that "HDD pools" comment is about. That's new and makes no sense to me.)

which runs from cron.d like this:

# Scrub the second Sunday of every month.
24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi

Can you comment on why you chose a per-pool approach rather than a "scrub all the pools" approach? (I'm not saying it's wrong, and I can likely argue a case either way. I'd just like to hear your thinking, if you don't mind.)

Specific advantages of "scrub all pools" would be that it can be enabled by default and it would provide a relatively clean upgrade path for Debian, which is already taking that approach (and does for mdadm too, at least from cron, FWIW).

I'm not sure if we need to offer both weekly and monthly scrubs. A weekly scrub seems overly frequent to me. Do you use weekly scrubs yourself?

@fredcooke

Hi all, how does this interact with default scrub scheduling? I was watching a YT video tonight and it started to intermittently freeze. I knew it was ZFS related because the always-idle cold-storage zpool HDDs in front of me were making noise and flashing lights, so I checked and both of my two pools were scrubbing. I don't mind an auto scrub, but before 2am is unacceptable to me. The "enable" commands here don't appear to specify a time. What's the default behaviour, timing wise, and how can it be controlled to be acceptable to the end user's particular scenario?

@gyakovlev (Contributor, author)

Hi all, how does this interact with default scrub scheduling? I was watching a YT video tonight and it started to intermittently freeze. I knew it was ZFS related because the always-idle cold-storage zpool HDDs in front of me were making noise and flashing lights, so I checked and both of my two pools were scrubbing. I don't mind an auto scrub, but before 2am is unacceptable to me. The "enable" commands here don't appear to specify a time. What's the default behaviour, timing wise, and how can it be controlled to be acceptable to the end user's particular scenario?

There's no default scheduling to interact with; some downstream addition probably does it for you, similar to the one mentioned in #12193 (review). In that case you probably don't need these timers.
These timers are disabled by default unless you explicitly opt in.

Timer scheduling is deferred to systemd; check man systemd.timer and man systemd.time, there are a lot of options and knobs.
To tune the timing you can run systemctl edit zfs-scrub-{weekly,monthly}@<poolname>.timer and modify OnCalendar and other scheduling parameters, per timer.
The defaults are intentionally simple: OnCalendar=weekly and OnCalendar=monthly.
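
For example, a candidate OnCalendar expression can be checked with systemd-analyze before applying it (illustrative only; the custom expression below is an assumption, not a default of these units beyond weekly/monthly):

$ systemd-analyze calendar weekly
$ systemd-analyze calendar "Sun *-*-* 02:00:00"

Each invocation prints the normalized form of the expression and when it would next elapse.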

@gyakovlev (Contributor, author)

On Debian, we have this script:

#!/bin/sh -eu

# Scrub all healthy pools that are not already scrubbing.
zpool list -H -o health,name 2>&1 | \
	awk '$1 ~ /^ONLINE/ { print $2; }' | \
while read pool
do
	if ! zpool status "$pool" | grep -q "scrub in progress"
	then
		# Ignore errors (i.e. HDD pools),
	# and continue with scrubbing other pools.
		zpool scrub "$pool" || true
	fi
done

(I'm not sure what that "HDD pools" comment is about. That's new and makes no sense to me.)

which runs from cron.d like this:

# Scrub the second Sunday of every month.
24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi

Can you comment on why you chose a per-pool approach rather than a "scrub all the pools" approach? (I'm not saying it's wrong, and I can likely argue a case either way. I'd just like to hear your thinking, if you don't mind.)

I, personally, dislike some of the Debian/Ubuntu automagic.
These timers were not meant to replace this or similar cron jobs.
They are meant to provide a flexible, opt-in solution for downstreams that do not provide such scheduling.
Many times I've been in a situation where I boot a Debian/Ubuntu installer ISO and install the ZFS packages to do some maintenance, and the installation triggers attempt to import the pools immediately, often failing because they don't use '-N' or '-R' and thus try to overmount / or other mountpoints, depending on configuration and properties.
And if the time is right, it will attempt a scrub as well.
That violates the principle of least surprise, at least for me.

Specific advantages of "scrub all pools" would be that it can be enabled by default and it would provide a relatively clean upgrade path for Debian, which is already taking that approach (and does for mdadm too, at least from cron, FWIW).

That could be a different PR; this one provides per-pool units, and timings can be edited separately.
For example, scrub overlap can be avoided by careful timing, as sketched below.
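
A sketch of such staggering with per-pool drop-ins (the pool names and times here are made up for illustration):

# systemctl edit zfs-scrub-monthly@tank.timer
[Timer]
OnCalendar=
OnCalendar=*-*-01 03:00:00

# systemctl edit zfs-scrub-monthly@backup.timer
[Timer]
OnCalendar=
OnCalendar=*-*-15 03:00:00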

I'm not sure if we need to offer both weekly and monthly scrubs. A weekly scrub seems overly frequent to me. Do you use weekly scrubs yourself?

I do, for the root/boot pool, which changes VERY often and is fairly small, no more than 256G usually.
I agree that for big data pools weekly may be excessive.

But again, this PR is about providing reasonably flexible timers, not a one-size-fits-all solution enabled by default.

@rlaager (Member) commented Jun 12, 2021

I'm not sure why GitHub won't let me reply directly to your comment... Given that you tested the .in thing, it seems fine to keep it simple (i.e. as you already have it).

@fredcooke

Thanks @gyakovlev - you were spot on, from another ticket I found an older version of this:

root@mako:~# cat /etc/cron.d/zfsutils-linux
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# TRIM the first Sunday of every month.
24 0 1-7 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/trim ]; then /usr/lib/zfs-linux/trim; fi

# Scrub the second Sunday of every month.
24 0 8-14 * * root if [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ]; then /usr/lib/zfs-linux/scrub; fi

I'll edit that to suit myself. Your approach sounds good: nice and flexible, but simple/easy if desired.

@gyakovlev (Contributor, author) commented Jun 14, 2021

@rlaager anything else to do here to get approval? I still see the red icon requesting changes =)
I left the .in files for simplicity, as agreed. And yes, I've tested without the .in suffix; it would require more changes.

@rlaager (Member) left a review comment

Sorry, apparently my review didn’t submit?! It was showing as pending. Submitting now.

@lnicola (Contributor) commented Jun 28, 2021

I'd love to see this merged, and it would obsolete my version (spoiler: I call zpool scrub -s in ExecStartPre) and the corresponding AUR package.

@gyakovlev (Contributor, author)

Finally found time to implement the things suggested in the comments. Thank you very much for the feedback and ideas.

The unit now properly waits for the scrub to exit (via the -w argument to zpool scrub):

● [email protected] - zpool scrub on zroot
     Loaded: loaded (/usr/lib/systemd/system/[email protected]; static)
     Active: active (running) since Fri 2021-07-02 20:53:55 PDT; 2min 43s ago
TriggeredBy: ● zfs-scrub-weekly@zroot.timer
       Docs: man:zpool-scrub(8)
   Main PID: 284881 (sh)
      Tasks: 2 (limit: 72827)
     Memory: 1.2M
        CPU: 24ms
     CGroup: /system.slice/system-zfs\x2dscrub.slice/[email protected]
             ├─284881 /bin/sh -c  if { /sbin/zpool status zroot | grep -q "scrub in progress" ;}; then echo "scrub in progress, exiting"; else /sbin/zpool scrub -w zroot; fi
             └─284884 /sbin/zpool scrub -w zroot

Jul 02 20:53:55 cerberus systemd[1]: Started zpool scrub on zroot.

and it exits and becomes inactive once the scrub is finished:

● [email protected] - zpool scrub on zroot
     Loaded: loaded (/usr/lib/systemd/system/[email protected]; static)
     Active: inactive (dead) since Fri 2021-07-02 20:57:46 PDT; 4s ago
TriggeredBy: ● zfs-scrub-weekly@zroot.timer
       Docs: man:zpool-scrub(8)
    Process: 284881 ExecStart=/bin/sh -c  if { /sbin/zpool status zroot | grep -q "scrub in progress" ;}; then echo "scrub in progress, exiting"; else /sbin/zpool scrub -w zroot; fi (code=exited, status=0/SUCCESS)
   Main PID: 284881 (code=exited, status=0/SUCCESS)
        CPU: 26ms

Jul 02 20:53:55 cerberus systemd[1]: Started zpool scrub on zroot.
Jul 02 20:57:46 cerberus systemd[1]: [email protected]: Deactivated successfully.

I haven't tested shutting down with DefaultDependencies=no, but I think it will do what we want; the zfs-mount and import units already set it as well.

I left multiple commits for easier review, but of course I can squash if you wish.

@gyakovlev (Contributor, author)

Forgot to mention: I used a multiline sh command to print a helpful message if a scrub is already running.

@gyakovlev (Contributor, author)

@rlaager I was very busy but finally found time to finish it.
implemented last variant with wait and grep output suggested by you in #12193 (comment)

I haven't tested the shutdown/stop while scrub is running though, will give it a test.

@rlaager (Member) commented Nov 6, 2021

@rlaager I was very busy but finally found time to finish it. implemented last variant with wait and grep output suggested by you in #12193 (comment)

I haven't tested the shutdown/stop while scrub is running though, will give it a test.

Do we want an explicit stop of the scrub service to stop or pause the scrub?

If stop, I suggest testing like this:

  1. In the scrub service: ExecStop=@sbindir@/zpool scrub -s %i
  2. Start the service. Stop the service. Make sure that stops the scrub.
  3. Start the service. Reboot. When the system comes back up, the scrub should continue. This indicates the scrub was not stopped on shutdown.

If pause, I suggest testing like this:

  1. Grab the output of systemctl show on the scrub service and save that to a file.
  2. In the scrub service: ExecStop=@sbindir@/zpool scrub -p %i and remove DefaultDependencies=no.
  3. Grab the output of systemctl show on the scrub service, save that to a file, and diff it against the file from step 1. Make sure the effect of removing DefaultDependencies=no was sane.
  4. Start the service. Stop the service. Make sure that pauses the scrub.
  5. Start the service. Reboot. When the system comes back up, the scrub should continue.

My inclination is that stopping the service should pause the scrub. This would allow someone to stop the service because they need to stop the disk I/O without losing the progress of their scrub.

@gyakovlev (Contributor, author)

Stumbled on this with ExecStop=@sbindir@/zpool scrub -p %i:

× [email protected] - zpool scrub on zroot
     Loaded: loaded (/usr/lib/systemd/system/[email protected]; static)
     Active: failed (Result: exit-code) since Sat 2021-11-06 01:58:14 PDT; 2s ago
       Docs: man:zpool-scrub(8)
    Process: 2189725 ExecStart=/bin/sh -c  if /sbin/zpool status zroot | grep "scrub in progress"; then exec /sbin>
    Process: 2660271 ExecStop=/sbin/zpool scrub -p zroot (code=exited, status=1/FAILURE)
   Main PID: 2189725 (code=exited, status=0/SUCCESS)
        CPU: 30ms

Nov 06 01:57:31 cerberus systemd[1]: Started zpool scrub on zroot.
Nov 06 01:58:14 cerberus zpool[2660271]: cannot pause scrubbing zroot: there is no active scrub
Nov 06 01:58:14 cerberus systemd[1]: [email protected]: Control process exited, code=exited, status=1/FAILURE
Nov 06 01:58:14 cerberus systemd[1]: [email protected]: Failed with result 'exit-code'.

This is the status after the scrub finished.
systemd runs ExecStop after the wait, which does not really work as-is for us.

So it looks like it needs a shell wrapper similar to ExecStart, or something else; I'll read up on this.

@gyakovlev (Contributor, author)

ExecStop= and ExecStopPost= are executed during a service restart operation.

ExecStop=/bin/sh -c '\
if /sbin/zpool status %i | grep "scrub in progress"; then\
exec /sbin/zpool scrub -p %i; fi'

That should do the trick, I think.

@gyakovlev (Contributor, author)

Now it looks like this:

systemctl start
..
Nov 06 02:07:08 cerberus systemd[1]: Started zpool scrub on zroot.

systemctl stop
...
Nov 06 02:07:29 cerberus systemd[1]: Stopping zpool scrub on zroot...
Nov 06 02:07:29 cerberus sh[2799550]:   scan: scrub in progress since Sat Nov  6 02:07:09 2021
Nov 06 02:07:29 cerberus systemd[1]: [email protected]: Deactivated successfully.
Nov 06 02:07:29 cerberus systemd[1]: Stopped zpool scrub on zroot.

systemctl start (restarting paused scrub)
...
Nov 06 02:08:15 cerberus systemd[1]: Started zpool scrub on zroot.
Nov 06 02:09:07 cerberus systemd[1]: [email protected]: Deactivated successfully.

^ finished scrub fine

I'll test the reboot scenario soon-ish too.

@rlaager (Member) commented Nov 8, 2021

I'm not sure if we need to actually check that the scrub is running before trying to pause. We could just pause and ignore errors:
ExecStop=/bin/sh -c '@sbindir@/zpool scrub -p %i || true'

Checking has a race condition: if the scrub stops in between the check and the scrub, the pause will still fail. Just eating errors avoids that, but has the downside of eating real errors if pausing the scrub fails for some reason. Given that pausing the scrub is extremely unlikely to fail in any other way, I think the right trade-off is to just ignore the errors. But I don't feel strongly about this.

@gyakovlev (Contributor, author)

No strong opinion on this either, so I implemented the suggestion; the race-condition argument is valid.
Still going to test reboots, I just don't have spare hardware for that right now.

@behlendorf (Contributor)

Are there any remaining concerns with this PR? Or is mainly waiting on some testing?

@gyakovlev (Contributor, author)

I tested; reboots look fine.
Here's the log when the service was started and a reboot was issued:

-- Boot d529e38325e242879a84eca461dd5d8a --
...
Dec 01 03:53:03 cerberus systemd[1]: Started zpool scrub on zroot.
Dec 01 03:53:14 cerberus systemd[1]: Stopping zpool scrub on zroot...
Dec 01 03:53:15 cerberus systemd[1]: [email protected]: Deactivated successfully.
Dec 01 03:53:15 cerberus systemd[1]: Stopped zpool scrub on zroot.

zpool state after reboot:

zpool status zroot
  pool: zroot
 state: ONLINE
  scan: scrub paused since Wed Dec  1 03:55:50 2021
        scrub started on Wed Dec  1 03:53:03 2021
        0B scanned, 0B issued, 59.2G total
        0B repaired, 0.00% done
config:

        NAME                                                      STATE     READ WRITE CKSUM
        zroot                                                     ONLINE       0     0     0
          mirror-0                                                ONLINE       0     0     0
            nvme-Samsung_SSD_960_PRO_512GB_zzz-part4  ONLINE       0     0     0
            nvme-Samsung_SSD_960_PRO_512GB_xxx-part4  ONLINE       0     0     0

errors: No known data errors

Starting the service after reboot resumes the scrub as intended.

So I think it's ready to merge. Huge thanks to @rlaager for the help and suggestions.

Also, this patch has shipped in Gentoo for quite some time and no bugs have been reported so far; users are happy.

The only thing that may need attention: when the service is stopped after the scrub has finished, systemd logs this:

Dec 01 04:02:30 cerberus sh[1202199]: cannot pause scrubbing zroot: there is no active scrub

It's cosmetic, but somewhat misleading.

@rlaager (Member) commented Dec 1, 2021

when service is stopped after scrub is finished, systemd logs this

Currently, you have this:

ExecStop=/bin/sh -c '@sbindir@/zpool scrub -p %i || true'

You could address that error with:

ExecStop=/bin/sh -c '@sbindir@/zpool scrub -p %i 2>/dev/null || true'

Alternatively, if you're not going to suppress the error, you should be able to just prefix the command with a dash to ignore the exit status, eliminating the sh and true:

ExecStop=-@sbindir@/zpool scrub -p %i

@gyakovlev (Contributor, author)

ExecStop=-@sbindir@/zpool scrub -p %i 2>/dev/null

gives:

zpool[2820464]: cannot open '2>/dev/null': invalid character '>' in pool name

So I opted for this instead (systemd does not pass Exec lines through a shell, so the redirection above was handed to zpool as a literal argument):
ExecStop=-/bin/sh -c 'exec @sbindir@/zpool scrub -p %i 2>/dev/null || true'

This gives clean output if the scrub completes:

Dec 15 18:15:23 cerberus systemd[1]: Started zpool scrub on zroot.
Dec 15 18:17:37 cerberus systemd[1]: [email protected]: Deactivated successfully.

and it still gives a message if the scrub is actually stopped mid-run:

Dec 15 18:20:15 cerberus systemd[1]: Stopping zpool scrub on zroot...
Dec 15 18:20:15 cerberus systemd[1]: [email protected]: Deactivated successfully.

And, as you said, I agree that catching errors for the pause can be omitted, because it's already safe to reboot with a scrub running anyway.

@gyakovlev (Contributor, author)

rebased

@rlaager (Member) left a review comment

I don’t think exec … || true is going to work. The exec ends the shell process. If you need to eat failure status, just remove the exec.
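
A quick way to see that behaviour in a shell (illustrative only):

$ sh -c 'exec false || true'; echo $?
1
$ sh -c 'false || true'; echo $?
0

With exec, the shell is replaced by the command, so the || true fallback never runs and the failure status propagates.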

Otherwise, this seems good to me.

timers can be enabled as follows:

systemctl enable zfs-scrub-weekly@${poolname}.timer --now
systemctl enable zfs-scrub-monthly@${poolname}.timer --now

Each timer will pull in zfs-scrub@${poolname}.service, which is not
schedule-specific.

Signed-off-by: Georgy Yakovlev <[email protected]>
@gyakovlev (Contributor, author)

Done, removed the exec.
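
For reference, putting the pieces discussed in this thread together, the units end up looking roughly like this (a sketch reconstructed from the snippets above, not a verbatim copy of the merged files; descriptions, dependencies, and exact wording may differ):

# [email protected] (template; %i is the pool name)
[Unit]
Description=zpool scrub on %i
Documentation=man:zpool-scrub(8)

[Service]
ExecStart=/bin/sh -c '\
if @sbindir@/zpool status %i | grep -q "scrub in progress"; then \
echo "scrub in progress, exiting"; \
else @sbindir@/zpool scrub -w %i; fi'
ExecStop=-/bin/sh -c '@sbindir@/zpool scrub -p %i 2>/dev/null || true'

# [email protected] (the monthly variant differs only in OnCalendar)
[Unit]
Description=Weekly zpool scrub timer for %i
Documentation=man:zpool-scrub(8)

[Timer]
OnCalendar=weekly
Unit=zfs-scrub@%i.service

[Install]
WantedBy=timers.target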

gentoo-bot pushed a commit to gentoo/gentoo that referenced this pull request Dec 16, 2021
@behlendorf added the Status: Accepted (Ready to integrate: reviewed, tested) label and removed the Status: Code Review Needed label on Dec 16, 2021.
@behlendorf merged commit 2300621 into openzfs:master on Dec 16, 2021.
@behlendorf (Contributor)

@gyakovlev @rlaager thanks for working to get this one over the finish line. Merged.

tonyhutter pushed a commit to tonyhutter/zfs that referenced this pull request Feb 10, 2022
Timers can be enabled as follows:

systemctl enable zfs-scrub-weekly@${poolname}.timer --now
systemctl enable zfs-scrub-monthly@${poolname}.timer --now

Each timer will pull in zfs-scrub@${poolname}.service, which is not
schedule-specific.

Added PERIODIC SCRUB section to zpool-scrub.8.

Reviewed-by: Richard Laager <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Georgy Yakovlev <[email protected]>
Closes openzfs#12193
nicman23 pushed a commit to nicman23/zfs that referenced this pull request Aug 22, 2022
Timers can be enabled as follows:

systemctl enable zfs-scrub-weekly@${poolname}.timer --now
systemctl enable zfs-scrub-monthly@${poolname}.timer --now

Each timer will pull in zfs-scrub@${poolname}.service, which is not
schedule-specific.

Added PERIODIC SCRUB section to zpool-scrub.8.

Reviewed-by: Richard Laager <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Georgy Yakovlev <[email protected]>
Closes openzfs#12193
mittyorz added a commit to mittyorz/infra-zfs that referenced this pull request Nov 27, 2022
infra/ops#506
1st Sat schedule can overlap with zfs-dump, which can cause excessive
disk I/O

remove [email protected] and changed to use the unit file introduced in
OpenZFS 2.1.3
openzfs/zfs#12193
gentoo-repo-qa-bot pushed a commit to gentoo-mirror/linux-be that referenced this pull request Jul 2, 2023
to use with systemd
Pr: openzfs/zfs#12193
Signed-off-by: Georgy Yakovlev <[email protected]>
gentoo-repo-qa-bot pushed a commit to gentoo-mirror/linux-be that referenced this pull request Jul 2, 2023