
CI/CD Failure: QM Enabled and not Active #602

Closed
dougsland opened this issue Oct 5, 2024 · 1 comment


dougsland commented Oct 5, 2024

Our CI/CD is currently failing. I don't believe it's anything critical, but we must fix it ASAP. Extra help is welcome.

cc @pbrilla-rh @pengshanyu @nsednev @Yarboa @leistnerova

# STDOUT:
---v---v---v---v---v---

---^---^---^---^---^---

# STDERR:
---v---v---v---v---v---
/var/ARTIFACTS/work-ffi14_gg2z9
Found 1 plan.

/plans/e2e/ffi
summary: FFI - QM FreedomFromInterference
    discover
        how: fmf
        directory: /var/ARTIFACTS/git-645c4d7ee803e1756b330901fc97022ef5136702oool1s9k
        hash: b53b7b3
        filter: tag:ffi
        summary: 22 tests selected
            /tests/ffi/attempts_to_access_forbidden_file_system_resource/devices/check_dev_console
            /tests/ffi/attempts_to_access_forbidden_file_system_resource/devices/check_dev_disk
            /tests/ffi/attempts_to_access_forbidden_file_system_resource/devices/check_dev_kmsg
            /tests/ffi/attempts_to_access_forbidden_file_system_resource/regular_os_files/check_etc_qm
            /tests/ffi/attempts_to_access_forbidden_file_system_resource/regular_os_files/check_usr_lib_qm
            /tests/ffi/attempts_to_access_forbidden_file_system_resource/regular_os_files/check_usr_share_qm
            /tests/ffi/attempts_to_access_forbidden_file_system_resource/shared_memory_files/check_dev_shm
            /tests/ffi/attempts_to_access_forbidden_file_system_resource/sockets/check_run_systemd_journal_socket
            /tests/ffi/attempts_to_access_forbidden_file_system_resource/sockets/check_run_udev_control
            /tests/ffi/deny_sched_setattr
            /tests/ffi/deny_set_scheduler
            /tests/ffi/dev_mem_not_present
            /tests/ffi/disk
            /tests/ffi/memory
            /tests/ffi/modules
            /tests/ffi/qm-oom-score-adj
            /tests/ffi/qm-unit-files
            /tests/ffi/qm_nested
            /tests/ffi/selinux
            /tests/ffi/sysctl
            /tests/ffi/tcp_max_syn_backlog
            /tests/ffi/agent-flood
    provision
        queued provision.provision task #1: default-0
        
        provision.provision task #1: default-0
        how: connect
        primary address: 18.218.104.162
        topology address: 18.218.104.162
        port: 22
        key: /etc/citool.d/id_rsa_artemis
        multihost name: default-0
        arch: x86_64
        distro: CentOS Stream 9
        kernel: 5.14.0-513.el9.x86_64
        package manager: dnf
        selinux: yes
        is superuser: yes
    
        summary: 1 guest provisioned
    prepare
        queued push task #1: push to default-0
        
        push task #1: push to default-0
    
        queued prepare task #1: Install podman on default-0
        queued prepare task #2: Set QM env on default-0
        queued prepare task #3: requires on default-0
        
        prepare task #1: Install podman on default-0
        how: install
        name: Install podman
        package: 1 package requested
            podman
        
        prepare task #2: Set QM env on default-0
        how: shell
        name: Set QM env
        overview: 1 script found
        script:
            cd tests/e2e
            ./set-ffi-env-e2e "--set-qm-disk-part=yes"
        fail: Command '/var/ARTIFACTS/work-ffi14_gg2z9/plans/e2e/ffi/tree/tmt-prepare-wrapper.sh-Set-QM-env-default-0' returned 1.
    finish
    
        summary: 0 tasks completed

plan failed

The exception was caused by 1 earlier exceptions

Cause number 1:

    prepare step failed

    The exception was caused by 1 earlier exceptions

    Cause number 1:

        Command '/var/ARTIFACTS/work-ffi14_gg2z9/plans/e2e/ffi/tree/tmt-prepare-wrapper.sh-Set-QM-env-default-0' returned 1.

        stdout (38 lines)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        [ INFO  ] Starting setup
        [ INFO  ] ==============================

        [ INFO  ] Check if qm requires additional partition
        [ INFO  ] ==============================
        [ INFO  ] select_disk_to_partition, found nvme0n1
        nvme1n1
        [ INFO  ] ==============================
        Calling sync before partitioning /dev/nvme1n1...
        Creating new partition on /dev/nvme1n1...
        [ INFO  ] select_disk_to_partition, found /dev/nvme1n1p1
        [ INFO  ] ==============================
        Rescanning partition table on /dev/nvme1n1...
        Test artifacts repository                       890 kB/s | 2.0 kB     00:00    
        Package parted-3.5-2.el9.x86_64 is already installed.
        Dependencies resolved.
        Nothing to do.
        Complete!
        Partitioning and kernel update completed for /dev/nvme1n1p1
        meta-data=/dev/nvme1n1p1         isize=512    agcount=4, agsize=1179584 blks
                 =                       sectsz=4096  attr=2, projid32bit=1
                 =                       crc=1        finobt=1, sparse=1, rmapbt=0
                 =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
        data     =                       bsize=4096   blocks=4718336, imaxpct=25
                 =                       sunit=0      swidth=0 blks
        naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
        log      =internal log           bsize=4096   blocks=16384, version=2
                 =                       sectsz=4096  sunit=1 blks, lazy-count=1
        realtime =none                   extsz=4096   blocks=0, rtextents=0
        [ INFO  ] Create_qm_disks, prepare regular xfs fs
        [ INFO  ] ==============================
        [ INFO  ] Create_qm_disks, prepare and mount /var/qm
        [ INFO  ] ==============================

        [ INFO  ] Checking if QM already installed
        [ INFO  ] ==============================
        [ INFO  ] QM Enabled and not Active
        [ INFO  ] ==============================
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

        stderr (270 lines)
        ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        ++ date +%s
        + START_TIME=1728131853
        +++ dirname -- ./set-ffi-env-e2e
        ++ cd -- .
        ++ pwd
        + SCRIPT_DIR=/var/ARTIFACTS/work-ffi14_gg2z9/plans/e2e/ffi/tree/tests/e2e
        + source /var/ARTIFACTS/work-ffi14_gg2z9/plans/e2e/ffi/tree/tests/e2e/lib/utils
        + source /var/ARTIFACTS/work-ffi14_gg2z9/plans/e2e/ffi/tree/tests/e2e/lib/container
        + source /var/ARTIFACTS/work-ffi14_gg2z9/plans/e2e/ffi/tree/tests/e2e/lib/systemd
        + source /var/ARTIFACTS/work-ffi14_gg2z9/plans/e2e/ffi/tree/tests/e2e/lib/tests
        ++ NODES_FOR_TESTING_ARR='control qm-node1'
        ++ readarray -d ' ' -t NODES_FOR_TESTING
        ++ CONTROL_CONTAINER_NAME=host
        ++ WAIT_BLUECHI_AGENT_CONNECT=5
        + source /var/ARTIFACTS/work-ffi14_gg2z9/plans/e2e/ffi/tree/tests/e2e/lib/diskutils
    + export CONFIG_NODE_AGENT_PATH=/etc/bluechi/agent.conf.d/agent.conf
    + CONFIG_NODE_AGENT_PATH=/etc/bluechi/agent.conf.d/agent.conf
    + export REGISTRY_UBI8_MINIMAL=registry.access.redhat.com/ubi8/ubi-minimal
    + REGISTRY_UBI8_MINIMAL=registry.access.redhat.com/ubi8/ubi-minimal
    + export WAIT_BLUECHI_SERVER_BE_READY_IN_SEC=5
    + WAIT_BLUECHI_SERVER_BE_READY_IN_SEC=5
    + export CONTROL_CONTAINER_NAME=control
    + CONTROL_CONTAINER_NAME=control
    + NODES_FOR_TESTING=('control' 'node1')
    + export NODES_FOR_TESTING
    + export IP_CONTROL_MACHINE=
    + IP_CONTROL_MACHINE=
    + export CONTAINER_CAP_ADD=
    + CONTAINER_CAP_ADD=
    + export ARCH=
    + ARCH=
    + export DISK=
    + DISK=
    + export PART_ID=
    + PART_ID=
    + export QC_SOC=SA8775P
    + QC_SOC=SA8775P
    + export SOC_DISTRO_FILE=/sys/devices/soc0/machine
    + SOC_DISTRO_FILE=/sys/devices/soc0/machine
    + export QC_SOC_DISK=sde
    + QC_SOC_DISK=sde
    + export BUILD_BLUECHI_FROM_GH_URL=
    + BUILD_BLUECHI_FROM_GH_URL=
    + export QM_GH_URL=
    + QM_GH_URL=
    + export BRANCH_QM=
    + BRANCH_QM=
    + export SET_QM_PART=
    + SET_QM_PART=
    + export USE_QM_COPR=packit/containers-qm-600
    + USE_QM_COPR=packit/containers-qm-600
    + RED='\033[91m'
    + GRN='\033[92m'
    + CLR='\033[0m'
    + ARGUMENT_LIST=("qm-setup-from-gh-url" "branch-qm" "set-qm-disk-part" "use-qm-copr")
    +++ printf help,%s:, qm-setup-from-gh-url branch-qm set-qm-disk-part use-qm-copr
    +++ basename ./set-ffi-env-e2e
    ++ getopt --longoptions help,qm-setup-from-gh-url:,help,branch-qm:,help,set-qm-disk-part:,help,use-qm-copr:, --name set-ffi-env-e2e --options '' -- --set-qm-disk-part=yes
    + opts=' --set-qm-disk-part '\''yes'\'' --'
    + eval set '-- --set-qm-disk-part '\''yes'\'' --'
    ++ set -- --set-qm-disk-part yes --
    + '[' 3 -gt 0 ']'
    + case "$1" in
    + SET_QM_PART=yes
    + shift 2
    + '[' 1 -gt 0 ']'
    + case "$1" in
    + break
    + info_message 'Starting setup'
    + '[' -z 'Starting setup' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Starting setup'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + '[' 0 -ne 0 ']'
    + stat /run/ostree-booted
    + echo
    + info_message 'Check if qm requires additional partition'
    + '[' -z 'Check if qm requires additional partition' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Check if qm requires additional partition'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + '[' -n yes ']'
    + create_qm_var_part
    + select_disk_to_partition
    + local disk_table
    ++ lsblk --noheadings --raw
    + disk_table='nvme0n1 259:0 0 100G 0 disk 
    nvme0n1p1 259:2 0 1M 0 part 
    nvme0n1p2 259:3 0 100G 0 part /
    nvme1n1 259:1 0 18G 0 disk '
    + local disks_arr
    ++ echo 'nvme0n1 259:0 0 100G 0 disk 
    nvme0n1p1 259:2 0 1M 0 part 
    nvme0n1p2 259:3 0 100G 0 part /
    nvme1n1 259:1 0 18G 0 disk '
    ++ awk '$1~// && $6=="disk" {print $1}'
    + disks_arr='nvme0n1
    nvme1n1'
    + info_message 'select_disk_to_partition, found nvme0n1
    nvme1n1'
    + '[' -z 'select_disk_to_partition, found nvme0n1
    nvme1n1' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] select_disk_to_partition, found nvme0n1
    nvme1n1'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + for disk in $disks_arr
    + [[ nvme0n1 == \v\d\a ]]
    ++ echo 'nvme0n1 259:0 0 100G 0 disk 
    nvme0n1p1 259:2 0 1M 0 part 
    nvme0n1p2 259:3 0 100G 0 part /
    nvme1n1 259:1 0 18G 0 disk '
    ++ grep -c nvme0n1
    + [[ 3 -eq 1 ]]
    + '[' -e /sys/devices/soc0/machine ']'
    + for disk in $disks_arr
    + [[ nvme1n1 == \v\d\a ]]
    ++ echo 'nvme0n1 259:0 0 100G 0 disk 
    nvme0n1p1 259:2 0 1M 0 part 
    nvme0n1p2 259:3 0 100G 0 part /
    nvme1n1 259:1 0 18G 0 disk '
    ++ grep -c nvme1n1
    + [[ 1 -eq 1 ]]
    + [[ nvme1n1 != \z\r\a\m\0 ]]
    + echo 'Calling sync before partitioning /dev/nvme1n1...'
    + sync
    + echo 'Creating new partition on /dev/nvme1n1...'
    ++ echo n
    ++ echo p
    ++ echo
    ++ echo
    ++ echo
    ++ echo w
    ++ fdisk /dev/nvme1n1
    + new_part='
    Welcome to fdisk (util-linux 2.37.4).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.

    Device does not contain a recognized partition table.
    Created a new DOS disklabel with disk identifier 0xb949b596.

    Command (m for help): Partition type
       p   primary (0 primary, 0 extended, 4 free)
       e   extended (container for logical partitions)
    Select (default p): Partition number (1-4, default 1): First sector (2048-37748735, default 2048): Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-37748735, default 37748735): 
    Created a new partition 1 of type '\''Linux'\'' and of size 18 GiB.

    Command (m for help): The partition table has been altered.
    Calling ioctl() to re-read partition table.
    Syncing disks.'
    ++ grep -Po 'new partition \K([0-9])'
    ++ echo '
    Welcome to fdisk (util-linux 2.37.4).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.

    Device does not contain a recognized partition table.
    Created a new DOS disklabel with disk identifier 0xb949b596.

    Command (m for help): Partition type
       p   primary (0 primary, 0 extended, 4 free)
       e   extended (container for logical partitions)
    Select (default p): Partition number (1-4, default 1): First sector (2048-37748735, default 2048): Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-37748735, default 37748735): 
    Created a new partition 1 of type '\''Linux'\'' and of size 18 GiB.

    Command (m for help): The partition table has been altered.
    Calling ioctl() to re-read partition table.
    Syncing disks.'
    + part_id=1
    + DISK=nvme1n1
    ++ grep -c nvme
    ++ echo nvme1n1
    + [[ 1 -eq 1 ]]
    + PART_ID=p1
    + info_message 'select_disk_to_partition, found /dev/nvme1n1p1'
    + '[' -z 'select_disk_to_partition, found /dev/nvme1n1p1' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] select_disk_to_partition, found /dev/nvme1n1p1'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + break
    + [[ -n nvme1n1 ]]
    + echo 'Rescanning partition table on /dev/nvme1n1...'
    + rpm -qf qm
    + dnf install -y parted
    + partprobe
    + echo 'Partitioning and kernel update completed for /dev/nvme1n1p1'
    + local slash_var
    + slash_var=/var/qm
    + '[' -e /sys/devices/soc0/machine ']'
    + mkfs.xfs /dev/nvme1n1p1
    + if_error_exit 'Error: mkfs.xfs /dev/nvme1n1p1 failed on VM'
    + local exit_code=0
    + '[' 0 '!=' 0 ']'
    + info_message 'Create_qm_disks, prepare regular xfs fs'
    + '[' -z 'Create_qm_disks, prepare regular xfs fs' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Create_qm_disks, prepare regular xfs fs'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + mkdir -p /new_var
    + mount /dev/nvme1n1p1 /new_var
    + test -d /var/qm
    + mkdir -p /var/qm
    + info_message 'Create_qm_disks, prepare and mount /var/qm'
    + '[' -z 'Create_qm_disks, prepare and mount /var/qm' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Create_qm_disks, prepare and mount /var/qm'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + mount /dev/nvme1n1p1 /var/qm
    ++ ls -A /var/qm
    + '[' -n '' ']'
    + echo
    + info_message 'Checking if QM already installed'
    + '[' -z 'Checking if QM already installed' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] Checking if QM already installed'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + rpm -q qm
    ++ systemctl is-enabled qm
    + QM_STATUS=generated
    + '[' generated == generated ']'
    ++ systemctl is-active qm
    + '[' inactive == active ']'
    + test -d /var/qm -a -d /etc/qm
    + info_message 'QM Enabled and not Active'
    + '[' -z 'QM Enabled and not Active' ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] QM Enabled and not Active'
    + info_message ==============================
    + '[' -z ============================== ']'
    + BLUE='\033[94m'
    + ENDCOLOR='\033[0m'
    + echo -e '[ \033[94mINFO\033[0m  ] =============================='
    + exit 1
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

---^---^---^---^---^---
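From the trace above, the failure is not in the partitioning itself: the script queries `systemctl is-enabled qm` (which returns `generated`) and `systemctl is-active qm` (which returns `inactive`), prints "QM Enabled and not Active", and then exits 1, which fails the whole prepare step. A minimal sketch of that decision logic, with the `systemctl` results passed in as arguments so it runs standalone (the helper name `qm_status_check` is hypothetical; the real script inlines these checks):

```shell
#!/usr/bin/env bash
# Reconstructed from the set -x trace: the prepare script treats
# "unit enabled/generated but not active" as a hard failure.
# qm_status_check is a hypothetical helper, not a name from the repo.
qm_status_check() {
    local enabled_state="$1"   # e.g. output of: systemctl is-enabled qm
    local active_state="$2"    # e.g. output of: systemctl is-active qm

    if [ "${enabled_state}" == "generated" ]; then
        if [ "${active_state}" == "active" ]; then
            echo "QM Enabled and Active"
            return 0
        fi
        # This is the branch hit in the CI log above (exit 1).
        echo "QM Enabled and not Active"
        return 1
    fi
    echo "QM not enabled"
    return 1
}

# Values observed in the failing run:
qm_status_check "generated" "inactive" || echo "prepare step would fail here"
```

Since tmt propagates the script's non-zero exit code, this single check is enough to mark the whole plan as failed even though every earlier step (partitioning, mkfs, mounts) succeeded.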

tmt-reproducer
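For context on the earlier disk-selection step (which worked correctly in this run), the trace shows the script parsing `lsblk --noheadings --raw` and keeping only rows whose TYPE column is `disk`. A self-contained sketch using the exact lsblk output captured in the log, stubbed so it runs without the real device layout:

```shell
#!/usr/bin/env bash
# Disk selection as seen in the trace: filter `lsblk --noheadings --raw`
# output down to whole-disk entries ($6 == "disk"), skipping partitions.
# The lsblk output is stubbed here with the values from the failing run.
lsblk_output='nvme0n1 259:0 0 100G 0 disk
nvme0n1p1 259:2 0 1M 0 part
nvme0n1p2 259:3 0 100G 0 part /
nvme1n1 259:1 0 18G 0 disk'

disks=$(echo "$lsblk_output" | awk '$6=="disk" {print $1}')
echo "$disks"
```

The script then iterates over the candidates and picks `nvme1n1` (the only name appearing exactly once in the table, i.e. a disk with no existing partitions) for the `/var/qm` partition.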

@dougsland (Collaborator, Author)

Resolved by #604
