VFS: busy inodes on changed media or resized disk pool1/ext4 #68

Closed
tobert opened this issue Oct 13, 2010 · 2 comments
Labels
Component: ZVOL ZFS Volumes

Comments

tobert commented Oct 13, 2010

My test system is a Dell 1950 on loan from another team, with 16GB RAM, 24 × 1TB SAS drives (external JBOD), and two quad-core Xeon L5420s. The OS is Ubuntu 9.10 64-bit with the 2.6.31-14-server kernel (not what I'm targeting in production). The team loaning me the server installed Ubuntu over Solaris 10. SPL is 0.5.1, and I'm switching between ZFS 0.5.1 and git head.

My benchmark ran out of space, so I tried resizing the ext4 filesystem I have on one of the zvols and discovered that even though "zfs set volsize=2T pool1/foobar" succeeded, the kernel did not pick up the new size, so resize2fs refuses to grow the filesystem. The same workflow works fine with LVM2, so I see no reason it shouldn't work with zvols.

dmesg shows this issue's title, "VFS: busy inodes on changed media or resized disk pool1/foobar".

I also tried running sfdisk -R and blockdev --getsize64 $device to no avail.

Going off clues in the ChangeLog mentioning the switch to check_disk_change() for compatibility, I went into modules/zfs/zvol.c, changed the check_disk_change() call in zvol_update_volsize() to revalidate_disk(bdev->bd_disk), and rebuilt and reloaded the module; resizing now works. I tested resizing the zvol with heavy write I/O going to the filesystem (both XFS and ext4) and verified that the kernel recognizes the new size and that xfs_growfs and resize2fs behave as expected after the change to revalidate_disk().

Now the kernel shows "pool1/foobar: detected capacity change from 536870912000 to 1099511627776".

It may be best to put an ifdef or configure check in place to use revalidate_disk() where it's available; a rough sketch of that follows the snippet below.

    if (bdev == NULL)
        return EIO;

/* this isn't the correct way, but sufficient for testing the concept */
#ifdef revalidate_disk
    /* notify the kernel of the new capacity (2.6.28+ kernels) */
    error = revalidate_disk(bdev->bd_disk);
#else
    /* older kernels: fall back to the media-changed path */
    error = check_disk_change(bdev);
#endif
    /* NB: revalidate_disk() returns 0 on success, so this assertion
     * really only makes sense on the check_disk_change() path */
    ASSERT3U(error, !=, 0);
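
A cleaner variant of the ifdef/configure-check idea above might look roughly like the following sketch, assuming a configure-time kernel-API test defines a hypothetical HAVE_REVALIDATE_DISK macro when revalidate_disk() is available (2.6.28 and newer); the helper name below is likewise illustrative:

    #include <linux/fs.h>
    #include <linux/genhd.h>

    /* Sketch only: HAVE_REVALIDATE_DISK is a hypothetical macro, not
     * part of the current build system. */
    static int
    zvol_revalidate_bdev(struct block_device *bdev)
    {
        if (bdev == NULL)
            return EIO;

    #ifdef HAVE_REVALIDATE_DISK
        /* 2.6.28+: re-reads the capacity and logs the change immediately. */
        return revalidate_disk(bdev->bd_disk);
    #else
        /* Older kernels: fall back to the media-changed machinery;
         * check_disk_change() returns nonzero if a change was detected. */
        (void) check_disk_change(bdev);
        return 0;
    #endif
    }

The configure test itself would compile a small probe against the kernel headers and define the macro on success, presumably alongside the other kernel API checks already done at configure time.
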
behlendorf (Contributor) commented

Interesting. Thanks for the bug report and for digging into the issue. I'll get a proper fix merged into the latest code.

behlendorf (Contributor) commented

I've dug into your fix more carefully, and I think it's a viable workaround for now but probably not the right long-term fix. The check_disk_change() function should do what we want, although we need to make sure we get the media_changed and revalidate_disk callbacks right. This fix is going to miss the next tag, but we'll get it fixed.
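
For context, on the 2.6-era kernels involved here those hooks live in struct block_device_operations. A minimal sketch of the shape they take, using illustrative function names (my_zvol_*) rather than the actual zvol.c implementation:

    #include <linux/blkdev.h>
    #include <linux/genhd.h>
    #include <linux/module.h>

    /* Illustrative only: not the zvol.c code, just the hooks that
     * check_disk_change() relies on in 2.6-era kernels. */

    static int
    my_zvol_media_changed(struct gendisk *disk)
    {
        /* Return nonzero when the volume size has changed since the
         * last open, e.g. by testing a per-device "changed" flag. */
        return 1;
    }

    static int
    my_zvol_revalidate_disk(struct gendisk *disk)
    {
        /* Re-read the volume size and publish it to the block layer.
         * 1 TiB (the size from the dmesg line quoted above) is used
         * here purely as a placeholder. */
        set_capacity(disk, 1099511627776ULL >> 9);
        return 0;
    }

    static const struct block_device_operations my_zvol_ops = {
        .owner           = THIS_MODULE,
        .media_changed   = my_zvol_media_changed,
        .revalidate_disk = my_zvol_revalidate_disk,
    };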

behlendorf added a commit that referenced this issue Apr 29, 2011
This change fixes a kernel panic which would occur when resizing
a dataset which was not open.  The objset_t stored in the
zvol_state_t will be set to NULL when the block device is closed.
To avoid this issue we pass the correct objset_t as the third arg.

The code has also been updated to correctly notify the kernel
when the block device capacity changes.  For 2.6.28 and newer
kernels the capacity change will be immediately detected.  For
earlier kernels the capacity change will be detected when the
device is next opened.  This is a known limitation of older
kernels.

Online ext3 resize test case passes on 2.6.28+ kernels:
$ dd if=/dev/zero of=/tmp/zvol bs=1M count=1 seek=1023
$ zpool create tank /tmp/zvol
$ zfs create -V 500M tank/zd0
$ mkfs.ext3 /dev/zd0
$ mkdir /mnt/zd0
$ mount /dev/zd0 /mnt/zd0
$ df -h /mnt/zd0
$ zfs set volsize=800M tank/zd0
$ resize2fs /dev/zd0
$ df -h /mnt/zd0

Original-patch-by: Fajar A. Nugraha <[email protected]>
Issue #68
Issue #84
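
The capacity-change notification described in the commit above essentially amounts to publishing the new size on the gendisk and asking the kernel to revalidate it. A rough sketch for a 2.6.28+ kernel, with illustrative names (zv_disk, new_volsize) rather than the literal upstream fix:

    #include <linux/fs.h>
    #include <linux/genhd.h>
    #include <linux/types.h>

    /* Sketch only: zv_disk and new_volsize stand in for the zvol's
     * gendisk and the new volume size in bytes. */
    static void
    zvol_notify_capacity_change(struct gendisk *zv_disk, u64 new_volsize)
    {
        /* Publish the new size (in 512-byte sectors) to the block layer... */
        set_capacity(zv_disk, new_volsize >> 9);

        /* ...and have the kernel re-read it. On 2.6.28+ this is what logs
         * "detected capacity change from X to Y" and lets resize2fs or
         * xfs_growfs see the new size while the device stays open. */
        revalidate_disk(zv_disk);
    }
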
dajhorn referenced this issue in zfsonlinux/pkg-zfs Jan 6, 2012
Added the necessary build infrastructure for building packages
compatible with the Arch Linux distribution. As such, one can now run:

    $ ./configure
    $ make pkg     # Alternatively, one can run 'make arch' as well

on an Arch Linux machine to create two binary packages compatible with
the pacman package manager, one for the spl userland utilities and
another for the spl kernel modules. The new packages can then be
installed by running:

    # pacman -U $package.pkg.tar.xz

In addition, source-only packages suitable for an Arch Linux chroot
environment or remote builder can also be built using the 'sarch' make
rule.

NOTE: Since the source dist tarball is created on the fly from the head
of the build tree, its MD5 hash will be continually in flux.
As a result, the md5sum variable was intentionally omitted from the
PKGBUILD files, and the '--skipinteg' makepkg option is used. This may
or may not have any serious security implications, as the source tarball
is not being downloaded from an outside source.

Signed-off-by: Prakash Surya <[email protected]>
Signed-off-by: Brian Behlendorf <[email protected]>
Closes: #68
richardelling pushed a commit to richardelling/zfs that referenced this issue Oct 15, 2018
pcd1193182 pushed a commit to pcd1193182/zfs that referenced this issue Aug 20, 2019

* Sort log spacemap tunables in alphabetical order

Besides the whole commit being a nit, in reality it should
bring the diff of the spa_log_spacemap.c source file
between ZoL and delphix/zfs to zero.

Reviewed-by: George Melikov <[email protected]>
Reviewed-by: Chris Dunlop <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Serapheim Dimitropoulos <[email protected]>
Closes openzfs#9143

* Introduce getting holds and listing bookmarks through ZCP

Consumers of ZFS Channel Programs can now list bookmarks
and get holds from datasets. A minor refactoring was also
applied to distinguish between user and system properties
in ZCP.

Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Reviewed-by: Matt Ahrens <[email protected]>
Reviewed-by: Serapheim Dimitropoulos <[email protected]>
Ported-by: Serapheim Dimitropoulos <[email protected]>
Signed-off-by: Dan Kimmel <[email protected]>

OpenZFS-issue: https://illumos.org/issues/8862
Closes openzfs#7902
tonynguien pushed a commit to tonynguien/zfs that referenced this issue Dec 21, 2021
Disks can be added to an existing zettacache by restarting the agent
with additional `-c DEVICE` arguments.
rkojedzinszky pushed a commit to rkojedzinszky/zfs that referenced this issue Mar 7, 2023
There are times when end users may wish to have
a fast and convenient method to get the zpool guid
without having to use libzfs. This commit
exposes the zpool guid via kstats in a similar
manner to the zpool state.

Reviewed-by: Alexander Motin <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Andrew Walker <[email protected]>
Closes openzfs#13466