zfs-0.6.0-rc8.tar.gz fails tests on Debian 6.0.5 with 3.2.0 kernel #748

Closed
dpchrist opened this issue May 17, 2012 · 5 comments

Labels
Type: Documentation

Comments

@dpchrist

zfsonlinux:

I have a fresh Debian 6.0.5 amd64 (Squeeze) machine with a 3.2.0 kernel (linux-image-3.2.0-0.bpo.2-amd64 from debian-backports, needed for CPU/motherboard Intel HD 2000 graphics).

spl-0.6.0-rc8.tar.gz builds, installs, and tests okay.

zfs-0.6.0-rc8.tar.gz builds and installs okay, but fails testing (see console session below).

It looks like the ioctl interface has changed between the stock kernel (2.6.32?) and what I have (?).
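For what it's worth, a quick way to check whether the running kernel exposes a usable loop driver at all might be something like the following (a rough sketch; module and device names can vary per kernel build):

grep loop /proc/devices        # is a loop block driver registered?
ls -l /dev/loop* 2>/dev/null   # are any loop device nodes present?
/sbin/modprobe -n -v loop      # dry run: would the loop module load?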

Please advise.

TIA,

David

2012-05-16 20:52:13 dpchrist@i72600s ~/build/zfs-0.6.0-rc8
$ sudo /usr/libexec/zfs/zconfig.sh -c -v
Destroying
1 persistent zpool.cache ioctl: LOOP_SET_FD: Function not implemented
zpool-create.sh: Error 1 creating /tmp/zpool-vdev0 -> /dev/loop-control loopback
Fail (2)

2012-05-16 21:04:05 dpchrist@i72600s ~/build/zfs-0.6.0-rc8
$ cat /etc/debian_version
6.0.5

2012-05-16 21:04:26 dpchrist@i72600s ~/build/zfs-0.6.0-rc8
$ uname -a
Linux i72600s 3.2.0-0.bpo.2-amd64 #1 SMP Mon Apr 23 08:38:01 UTC 2012 x86_64 GNU/Linux

2012-05-16 21:05:05 dpchrist@i72600s ~/build/zfs-0.6.0-rc8
$ dpkg --get-selections | egrep "(spl|zfs)"
spl install
spl-modules install
spl-modules-devel install
zfs install
zfs-devel install
zfs-dracut install
zfs-modules install
zfs-modules-devel install
zfs-test install

@behlendorf
Contributor

This can occur when you attempt to run the tests on top of a file system which does not support creating loopback devices. Is /tmp/ a normal file system or something like tmpfs?
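One way to check this outside the ZFS scripts is to try attaching a throwaway file under /tmp to a loop device by hand, roughly like this (a sketch only; /tmp/loop-test.img and /dev/loop0 are placeholder names):

dd if=/dev/zero of=/tmp/loop-test.img bs=1M count=1
/sbin/losetup -f                               # print the first free loop device
/sbin/losetup /dev/loop0 /tmp/loop-test.img && echo "loopback attach OK"
/sbin/losetup -d /dev/loop0                    # detach
rm -f /tmp/loop-test.img

If the manual attach fails with the same LOOP_SET_FD error, the problem is in the loop setup on that system rather than in ZFS itself.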

@dpchrist
Author

On 05/18/2012 10:05 AM, Brian Behlendorf wrote:

Is /tmp/ a normal file system or something like tmpfs?

Thanks for the response. :-)

I'm not sure if /tmp/ is unusual. Does this help?

2012-05-18 19:06:27 root@i72600s ~
# mount

/dev/mapper/vg_system-lv_root on / type ext4 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext2 (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
nfsd on /proc/fs/nfsd type nfsd (rw)

Is there a way I can get more information from zconfig.sh, or whatever it is calling (e.g., a debug or log file)?

David

@behlendorf
Contributor

If you run zconfig.sh as follows, you'll get a full trace of the script. However, I wouldn't be too worried about this failure since it has more to do with your environment than with ZFS itself.

sudo bash -x /usr/libexec/zfs/zconfig.sh -c -v
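
If the trace is long, it can be handy to capture it to a file as well, for example (zconfig-trace.log is just an arbitrary name):

sudo bash -x /usr/libexec/zfs/zconfig.sh -c -v 2>&1 | tee zconfig-trace.log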

@dpchrist
Author

On 05/21/2012 09:54 AM, Brian Behlendorf wrote:

If you run zconfig.sh as follows, you'll get a full trace of the script. However, I wouldn't be too worried about this failure since it has more to do with your environment than with ZFS itself.

sudo bash -x /usr/libexec/zfs/zconfig.sh -c -v

See output, below.

So, if I'm not supposed to be worried about this failure, will zfsonlinux work on this system?

David

2012-05-22 22:25:58 root@i72600s ~
# bash -x /usr/libexec/zfs/zconfig.sh -c -v
++ dirname /usr/libexec/zfs/zconfig.sh
+ basedir=/usr/libexec/zfs
+ SCRIPT_COMMON=common.sh
+ '[' -f /usr/libexec/zfs/common.sh ']'
+ . /usr/libexec/zfs/common.sh
+++ dirname /usr/libexec/zfs/zconfig.sh
++ basedir=/usr/libexec/zfs
++ SCRIPT_CONFIG=zfs-script-config.sh
++ '[' -f /usr/libexec/zfs/../zfs-script-config.sh ']'
++ KERNEL_MODULES=(zlib_deflate zlib_inflate)
++ MODULES=(spl splat zavl znvpair zunicode zcommon zfs)
++ PROG=''
++ CLEANUP=
++ VERBOSE=
++ VERBOSE_FLAG=
++ FORCE=
++ FORCE_FLAG=
++ DUMP_LOG=
++ ERROR=
++ RAID0S=()
++ RAID10S=()
++ RAIDZS=()
++ RAIDZ2S=()
++ TESTS_RUN='*'
++ TESTS_SKIP=
++ prefix=/
++ exec_prefix=/
++ libexecdir=/usr/libexec
++ pkglibexecdir=/usr/libexec/zfs
++ bindir=//bin
++ sbindir=//sbin
++ udevdir=/lib/udev
++ udevruledir=/lib/udev/rules.d
++ sysconfdir=/etc
++ ETCDIR=/etc
++ DEVDIR=/dev/disk/zpool
++ ZPOOLDIR=/usr/libexec/zfs/zpool-config
++ ZPIOSDIR=/usr/libexec/zfs/zpios-test
++ ZPIOSPROFILEDIR=/usr/libexec/zfs/zpios-profile
++ ZDB=//sbin/zdb
++ ZFS=//sbin/zfs
++ ZINJECT=//sbin/zinject
++ ZPOOL=//sbin/zpool
++ ZPOOL_ID=//bin/zpool_id
++ ZTEST=//sbin/ztest
++ ZPIOS=//sbin/zpios
++ COMMON_SH=/usr/libexec/zfs/common.sh
++ ZFS_SH=/usr/libexec/zfs/zfs.sh
++ ZPOOL_CREATE_SH=/usr/libexec/zfs/zpool-create.sh
++ ZPIOS_SH=/usr/libexec/zfs/zpios.sh
++ ZPIOS_SURVEY_SH=/usr/libexec/zfs/zpios-survey.sh
++ LDMOD=/sbin/modprobe
++ LSMOD=/sbin/lsmod
++ RMMOD=/sbin/rmmod
++ INFOMOD=/sbin/modinfo
++ LOSETUP=/sbin/losetup
++ MDADM=/sbin/mdadm
++ PARTED=/sbin/parted
++ BLOCKDEV=/sbin/blockdev
++ LSSCSI=/usr/bin/lsscsi
++ SCSIRESCAN=/usr/bin/scsi-rescan
++ SYSCTL=/sbin/sysctl
++ UDEVADM=/sbin/udevadm
++ AWK=/usr/bin/awk
++ COLOR_BLACK='\033[0;30m'
++ COLOR_DK_GRAY='\033[1;30m'
++ COLOR_BLUE='\033[0;34m'
++ COLOR_LT_BLUE='\033[1;34m'
++ COLOR_GREEN='\033[0;32m'
++ COLOR_LT_GREEN='\033[1;32m'
++ COLOR_CYAN='\033[0;36m'
++ COLOR_LT_CYAN='\033[1;36m'
++ COLOR_RED='\033[0;31m'
++ COLOR_LT_RED='\033[1;31m'
++ COLOR_PURPLE='\033[0;35m'
++ COLOR_LT_PURPLE='\033[1;35m'
++ COLOR_BROWN='\033[0;33m'
++ COLOR_YELLOW='\033[1;33m'
++ COLOR_LT_GRAY='\033[0;37m'
++ COLOR_WHITE='\033[1;37m'
++ COLOR_RESET='\033[0m'
+ PROG=zconfig.sh
+ getopts 'hvct:s:?' OPTION
+ case $OPTION in
+ CLEANUP=1
+ getopts 'hvct:s:?' OPTION
+ case $OPTION in
+ VERBOSE=1
+ getopts 'hvct:s:?' OPTION
++ id -u
+ '[' 0 '!=' 0 ']'
+ init
+ local RULE=/lib/udev/rules.d/90-zfs.rules
+ test -e /lib/udev/rules.d/90-zfs.rules
+ trap 'mv /lib/udev/rules.d/90-zfs.rules.disabled /lib/udev/rules.d/90-zfs.rules; exit 0' INT TERM EXIT
+ mv /lib/udev/rules.d/90-zfs.rules /lib/udev/rules.d/90-zfs.rules.disabled
+ '[' 1 ']'
+ /usr/libexec/zfs/zfs.sh -u
+ cleanup_md_devices
++ grep -v p
++ ls '/dev/md*'
+ destroy_md_devices ''
+ local MDDEVICES=
+ msg 'Destroying '
+ '[' 1 ']'
+ echo 'Destroying '
Destroying
+ return 0
+ udev_trigger
+ '[' -f /sbin/udevadm ']'
+ /sbin/udevadm trigger --action=change --subsystem-match=block
+ /sbin/udevadm settle
+ cleanup_loop_devices
++ mktemp
+ local TMP_FILE=/tmp/tmp.OXYJYkxXyk
+ /sbin/losetup -a
+ tr -d '()'
+ /usr/bin/awk -F: -v losetup=/sbin/losetup '/zpool/ { system("losetup -d "$1) }' /tmp/tmp.OXYJYkxXyk
+ /usr/bin/awk '-F ' '/zpool/ { system("rm -f "$3) }' /tmp/tmp.OXYJYkxXyk
+ rm -f /tmp/tmp.OXYJYkxXyk
+ rm -f '/tmp/zpool.cache.*'
+ SCSI_DEBUG=0
+ /sbin/modinfo scsi_debug
+ SCSI_DEBUG=1
+ HAVE_LSSCSI=0
+ test -f /usr/bin/lsscsi
+ HAVE_LSSCSI=1
+ '[' 1 -eq 0 ']'
+ '[' 1 -eq 0 ']'
+ run_test 1 'persistent zpool.cache'
+ local TEST_NUM=1
+ local 'TEST_NAME=persistent zpool.cache'
+ '[' '' = '' ']'
+ run_one_test 1 'persistent zpool.cache'
+ local TEST_NUM=1
+ local 'TEST_NAME=persistent zpool.cache'
+ printf '%-4d %-34s ' 1 'persistent zpool.cache'
1 persistent zpool.cache + test_1
+ local POOL_NAME=test1
++ mktemp
+ local TMP_FILE1=/tmp/tmp.lRvgSzx1CR
++ mktemp
+ local TMP_FILE2=/tmp/tmp.tEhJRXY3qO
++ mktemp -p /tmp zpool.cache.XXXXXXXX
+ local TMP_CACHE=/tmp/zpool.cache.GWd8BIbp
+ /usr/libexec/zfs/zfs.sh zfs=spa_config_path=/tmp/zpool.cache.GWd8BIbp
+ /usr/libexec/zfs/zpool-create.sh -p test1 -c lo-raidz2
ioctl: LOOP_SET_FD: Function not implemented
zpool-create.sh: Error 1 creating /tmp/zpool-vdev0 -> /dev/loop-control loopback
+ fail 2
+ echo -e '\033[0;31mFail\033[0m (2)'
Fail (2)
+ exit 2
+ mv /lib/udev/rules.d/90-zfs.rules.disabled /lib/udev/rules.d/90-zfs.rules
+ exit 0

@behlendorf
Contributor

Yes, it should work fine.
