DAOS-16265 test: Fix erasurecode/rebuild_fio.py out of space (#15020) #15340
Conversation
Prevent accumulating large server log files caused by temporarily enabling the DEBUG log mask while creating or destroying pools.

Skip-unit-tests: true
Skip-fault-injection-test: true
Test-tag: EcodFioRebuild EcodOnlineMultFail
Skip-func-hw-test-large-md-on-ssd: false

Signed-off-by: Phil Henderson <[email protected]>
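For context, the pattern the description refers to is scoping the verbose mask to just the pool lifecycle so the engines do not keep writing DEBUG output for the rest of the run. Below is a minimal sketch of that pattern, not the PR's actual change: `server.set_log_mask()`, `server.create_pool()`, and the mask names stand in for whatever mechanism the test harness really uses (for example, driving `dmg server set-logmasks`).

```python
from contextlib import contextmanager


@contextmanager
def debug_log_mask(server, default_mask="INFO"):
    """Temporarily raise the engine log mask, restoring it on exit.

    Hypothetical helper for illustration; the harness's real API differs.
    """
    server.set_log_mask("DEBUG")  # hypothetical helper call
    try:
        yield
    finally:
        # Restore the quieter mask even if pool create/destroy raises,
        # so later test steps do not keep accumulating DEBUG output.
        server.set_log_mask(default_mask)


# Usage sketch: only the pool lifecycle runs at DEBUG verbosity.
# with debug_log_mask(server):
#     pool = server.create_pool(size="4GB")  # hypothetical API
#     pool.destroy()
```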
Ticket title is '[12-24]-./erasurecode/rebuild_fio.py:EcodFioRebuild.test_ec_online_rebuild_fio tests fail due to daos_server startup problem.'
Test stage Functional Hardware Large completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15340/1/execution/node/960/log
Test stage Functional Hardware Large MD on SSD completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15340/1/execution/node/976/log
Failures in https://build.hpdd.intel.com/job/daos-stack/job/daos/job/PR-15340/1/testReport/:
In the Functional HW Large MD on SSD stage, the 24-./erasurecode/multiple_failure.py:EcodOnlineMultFail.test_ec_single_target_rank_failure test passed with a maximum use percentage of 51%.
Before requesting gatekeeper:
Features: (or Test-tag*) commit pragma was used or there is a reason documented that there are no appropriate tags for this PR.
Gatekeeper: