
Remove zfs_vdev_elevator module option #9609

Merged
behlendorf merged 1 commit into openzfs:master from retire-zfs_vdev_scheduler on Nov 27, 2019

Conversation

behlendorf (Contributor)

Motivation and Context

Issues #8664 and #9417. Due to issue #9417 we should strongly
consider retiring this feature in the 0.8.x series ahead of schedule.
This change has been planned for master; merging it now allows
us to easily backport it if we decide to do so.

Description

As described in commit f81d5ef the zfs_vdev_elevator module
option is being removed. Users who require this functionality
should update their systems to set the disk scheduler using a
udev rule.
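
For example, a rule along the following lines could pin the scheduler for whole disks. This is a minimal sketch only; the file name, the device match, and the choice of "none" are illustrative assumptions, not something shipped by this change, and the scheduler name must be one the running kernel actually offers (see the discussion below).

# Hypothetical /etc/udev/rules.d/60-scheduler.rules (sketch only)
# Select the "none" scheduler for whole SCSI/SATA disks; partitions
# (sda1, sdb2, ...) are not matched, since queue/scheduler only
# exists on the whole-disk device.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"

After editing such a rule it can be applied without a reboot by running "udevadm control --reload" followed by "udevadm trigger", and the result checked by reading /sys/block/<disk>/queue/scheduler.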

How Has This Been Tested?

# echo "foo" >/sys/module/zfs/parameters/zfs_vdev_scheduler
# dmesg | tail -1
[87364.594793] The 'zfs_vdev_scheduler' module option is not supported.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Performance enhancement (non-breaking change which improves efficiency)
  • Code cleanup (non-breaking change which makes code smaller or more readable)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Documentation (a change to man pages or other documentation)

@behlendorf added the Status: Code Review Needed label Nov 21, 2019
@rlaager (Member) left a comment
Code-wise, this looks fine to me.

Do we have any idea what the trade-offs are, performance-wise, between the noop/none schedulers and whatever is the default (deadline, I think)? I thought the reason this was originally implemented was to avoid ZFS's I/O scheduler and the kernel I/O scheduler fighting each other (or at a minimum, the latter just wasting effort for no gain).

Put differently, you said "if users"... is this something that only some users need to customize, or is this something that most people would want? If it's the latter, then I think we should ship such a udev rule by default.

@behlendorf (Contributor, Author)

Historically, using the kernel's cfq scheduler alongside ZFS's internal scheduler didn't result in optimal performance. This option was implemented at that time, when cfq was the default scheduler. However, that was a long time ago, and these days the default scheduler is deadline or mq-deadline (in fact, as of the 5.3 kernel, all non-multiqueue schedulers have been retired).

These days deadline, mq-deadline, noop, and none all perform well with ZFS, and users should not need to change the scheduler.

Regardless, I did look into adding a udev rule we could ship. Doing so is a bit more complicated than you'd expect, since there is no one right answer for what the scheduler should be set to. For example, on a CentOS 7 system running a 3.10 kernel with traditional HDDs, the available options are:

$ cat /sys/block/sda/queue/scheduler 
noop [deadline] cfq 

Whereas when running a 5.3 kernel with a virtual vd device, the options are entirely different:

$ cat /sys/block/vda/queue/scheduler 
[mq-deadline] kyber bfq none

I'm sure we could handle this, but given that the defaults are expected to work well, I don't think this is something we want to take on any longer.
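
For what it's worth, one sketch that could paper over the difference (assuming, as udev permits, that an ATTR assignment naming a scheduler the kernel doesn't offer simply fails and is skipped while later rules still run) would try both spellings:

# Hypothetical sketch only: attempt the legacy and multiqueue
# scheduler names in turn; the assignment the running kernel
# rejects fails harmlessly, leaving the other one in effect.
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"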

@behlendorf (Contributor, Author)

@kpande what mechanism do you use to configure the disk scheduler today to avoid this?

codecov bot commented Nov 22, 2019

Codecov Report

Merging #9609 into master will increase coverage by 0.01%.
The diff coverage is 0%.


@@            Coverage Diff             @@
##           master    #9609      +/-   ##
==========================================
+ Coverage   79.19%   79.21%   +0.01%     
==========================================
  Files         418      418              
  Lines      123531   123499      -32     
==========================================
- Hits        97828    97826       -2     
+ Misses      25703    25673      -30
Flag      Coverage Δ
#kernel   79.91% <0%> (+0.06%) ⬆️
#user     66.59% <ø> (-0.32%) ⬇️

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 0c46813...2ecf707.

@behlendorf (Contributor, Author)

It sounds like even if we shipped a udev rule for this, it wouldn't be of any use.

@PrivatePuffin (Contributor)

@kpande It would be great if you could turn that into an actual full-blown bug report, because that iSCSI target timeout doesn't sound like expected/wanted behavior...

@behlendorf added the Status: Accepted label and removed the Status: Code Review Needed label Nov 27, 2019
@behlendorf behlendorf merged commit 9e17e6f into openzfs:master Nov 27, 2019
tonyhutter pushed a commit to tonyhutter/zfs that referenced this pull request Dec 26, 2019
As described in commit f81d5ef the zfs_vdev_elevator module
option is being removed.  Users who require this functionality
should update their systems to set the disk scheduler using a
udev rule.

Reviewed-by: Richard Laager <[email protected]>
Reviewed-by: loli10K <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Issue openzfs#8664
Closes openzfs#9417
Closes openzfs#9609
tonyhutter pushed a commit to tonyhutter/zfs that referenced this pull request Dec 27, 2019 (same commit message as above)
tonyhutter pushed a commit that referenced this pull request Jan 23, 2020 (same commit message as above)
@behlendorf behlendorf deleted the retire-zfs_vdev_scheduler branch April 19, 2021 19:24