DAOS-16809 vos: container based stable epoch #15605
base: master
Conversation
Ticket title is 'DAOS local stable epoch'
Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/319/log
Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/350/log
Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/301/log
Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/398/log
Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/345/log
Test stage Build on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/521/log
Force-pushed from 42a21a8 to 96716b9
Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/383/log
Test stage Build on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/387/log
Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/348/log
Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/356/log
Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/374/log
Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/349/log
Force-pushed from 1bdab81 to 128319b
Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/
Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/
Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/
Test stage Unit Test with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/
Force-pushed from 1db207f to eead8cc
Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/
Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/
Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/
Test stage Unit Test with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/
Force-pushed from eead8cc to f3368e7
Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/
Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/
Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/
Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/12/testReport/
Force-pushed from c89e9f0 to 003a9ff
Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/14/testReport/
Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/14/testReport/
Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/14/testReport/
Force-pushed from 003a9ff to 1411e31
Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/15/testReport/
Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/15/testReport/
Force-pushed from 1411e31 to 88e58e1
Just reading the description and wondering... do we not have any B+tree of DTX entries? Can we not just make the key order first by epoch?
Strictly sorting DTX entries by epoch would cause too much overhead, since DTX add/del operations are frequent. The non-strict sort mode in this patch has only O(1) overhead, which is enough for incremental reintegration.
@@ -436,6 +436,7 @@ class EngineYamlParameters(YamlParameters):
             "D_LOG_FILE_APPEND_PID=1",
             "DAOS_POOL_RF=4",
             "CRT_EVENT_DELAY=1",
+            "DAOS_VOS_AGG_GAP=25",
- Why change this from the default of 60?
- I would think we need to run this PR with affected tests? Maybe Features: aggregation?
The environment variable DAOS_VOS_AGG_GAP is newly introduced by this patch. Before that, the gap between aggregation and the current HLC was DAOS_AGG_THRESHOLD, which was about 20 seconds. That gap cannot satisfy the new requirement for the stable epoch in this patch, because it was fixed and could be too short, causing some transactions to be restarted frequently.
Generally, a small NVMe device fills up more easily and so needs a smaller aggregation gap; that is usually the case for test environments such as CI. In a real production environment we can use a relatively large setting, which is why the default is 60 seconds.
VOS aggregation runs automatically in the background during CI tests unless it is explicitly disabled.
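As a rough illustration of how such a knob might be consumed (plain C with invented names, not the actual server code), the engine could read the variable once at startup and fall back to the documented 60-second default:

#include <stdlib.h>
#include <stdint.h>

#define VOS_AGG_GAP_DEF 60 /* seconds; the documented default */

static uint32_t vos_agg_gap = VOS_AGG_GAP_DEF;

/* Hypothetical init helper: override the default from the server-side
 * environment variable DAOS_VOS_AGG_GAP, ignoring invalid values. */
static void
vos_agg_gap_init(void)
{
	const char *val = getenv("DAOS_VOS_AGG_GAP");

	if (val != NULL) {
		long gap = strtol(val, NULL, 10);

		if (gap > 0)
			vos_agg_gap = (uint32_t)gap;
	}
}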
My main concern is that the default pr tests might not be sufficient to cover this change. Even though aggregation is enabled by default, most tests do not explicitly verify aggregation behavior. I don't know this code well enough to say whether or not pr is enough.
I don't want to block this PR unnecessarily, so I will remove my -1, but please keep in mind that in general it is much more expensive to fix regressions after initial landing than in the original PR, where all we have to do is use Features: <tags>
I have some concerns about setting this flag for all tests. CI testing should resemble real-world testing on functional HW; the per-SSD capacity in CI functional tests is not different from production systems, where the capacity also varies greatly depending on what customers want.
It's fine to change it for some particular test, but (correct me if I'm wrong) this change here is for all CI tests?
If that is the case, we should not land it, TBH, unless there is some agreement that the default should change.
Yeah, this is for all functional CI tests.
"DAOS_VOS_AGG_GAP=25" this setting makes CI tests almost keep the same behavior as without the patch (was 20+ seconds before). If without such setting, then for some short-time test, VOS aggregation may not be triggered before the test completed, that will decrease the potential race windows with VOS aggregation. Then may hide potential bugs. We can remove such setting for CI tests, but I am afraid that:
- Space pressure for small pool configuration.
- Less race with VOS aggregation.
To calculate the container-based local stable epoch efficiently, we maintain a roughly epoch-ordered list of active DTX entries. Considering the related overhead, it is not easy to maintain a strictly sorted list of all active DTX entries. For a DTX whose leader resides on the current target, its epoch is already sorted when generated on the current engine. So the main difficulty is the DTX entries whose leaders are on remote targets.
On the other hand, the local stable epoch is mainly used to generate the global stable epoch for incremental reintegration. In fact, we do not need a very accurate global stable epoch for incremental reintegration: it is non-fatal if the calculated stable epoch is a bit smaller than the real one. For example, an error of a few seconds in the stable epoch can almost be ignored compared with the cost of rebuilding the whole target from scratch. So for a DTX entry whose leader is on a remote target, we keep it in a list that trends upward by epoch instead of strictly sorting by epoch, and we introduce an O(1) algorithm to calculate the local stable epoch over such an unsorted DTX entry list, as sketched below.
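To make the idea concrete, here is a minimal self-contained C sketch of an O(1) lower-bound calculation over the two lists; every structure, field, and function name below is invented for illustration and is not the actual patch code:

#include <stddef.h>
#include <stdint.h>

typedef uint64_t daos_epoch_t;

/* Active DTX entries, kept per container in two singly linked lists:
 * entries whose leader is local arrive in strict epoch order, while
 * entries from remote leaders arrive in rough (trending) epoch order. */
struct dtx_node {
	daos_epoch_t	 dn_epoch;
	struct dtx_node	*dn_next;
};

struct cont_dtx_lists {
	struct dtx_node	*cd_local;	  /* strictly epoch-sorted */
	struct dtx_node	*cd_remote;	  /* arrival order, roughly sorted */
	daos_epoch_t	 cd_remote_floor; /* conservative minimum epoch of
					   * still-active remote entries */
};

/* O(1): the stable epoch cannot pass the oldest entry that may still
 * commit. Only the list heads (plus the conservative floor for the
 * unsorted remote list) are inspected, so the result may be slightly
 * smaller than the true stable epoch, which is acceptable here. */
static daos_epoch_t
cont_local_stable_epoch(struct cont_dtx_lists *cd, daos_epoch_t bound)
{
	daos_epoch_t stable = bound; /* e.g. current HLC - vos_agg_gap */

	if (cd->cd_local != NULL && cd->cd_local->dn_epoch <= stable)
		stable = cd->cd_local->dn_epoch - 1;
	if (cd->cd_remote != NULL && cd->cd_remote_floor <= stable)
		stable = cd->cd_remote_floor - 1;
	return stable;
}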
Main VOS APIs for the stable epoch:
/* Calculate current locally known stable epoch for the given container. */
daos_epoch_t vos_cont_get_local_stable_epoch(daos_handle_t coh);
/* Get global stable epoch for the given container. */
daos_epoch_t vos_cont_get_global_stable_epoch(daos_handle_t coh);
/* Set global stable epoch for the given container. */
int vos_cont_set_global_stable_epoch(daos_handle_t coh, daos_epoch_t epoch);
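A brief usage sketch under stated assumptions: only the three prototypes above come from the patch; the surrounding flow, the already-open container handle coh, and the min_across_engines() helper are hypothetical:

/* Hypothetical flow: each engine reports its local stable epoch, the
 * minimum across engines becomes the global stable epoch, and that
 * value is stored back into each container. */
daos_epoch_t local_ep, global_ep;
int          rc;

local_ep  = vos_cont_get_local_stable_epoch(coh);
global_ep = min_across_engines(local_ep); /* hypothetical helper */
rc = vos_cont_set_global_stable_epoch(coh, global_ep);
if (rc == 0)
	global_ep = vos_cont_get_global_stable_epoch(coh);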
Another important enhancement in the patch is handling the potential conflict between EC/VOS aggregation and delayed modifications with very old epochs.
For a standalone transaction started on the DTX leader, its epoch is generated by the leader, and the modification RPC is then forwarded to the related non-leader(s). If the forwarded RPC is delayed for some reason, such as network congestion or a busy non-leader, the epoch of the transaction may become very old (exceeding the related threshold), and VOS aggregation may have already aggregated the related epoch range. In that case, the non-leader rejects the modification to avoid data loss or corruption.
For a distributed transaction, if there is no read (fetch, query, enumerate, and so on) before the client commit_tx, the related DTX leader generates the epoch for the transaction after the client commit_tx, and epoch handling is then the same as for the standalone transaction above.
If the distributed transaction involves a read before the client commit_tx, its epoch is generated by the first engine accessed for the read. If the transaction then takes too long, its epoch may be very old by the time of the client commit_tx, and the related DTX leader will have to reject the transaction to avoid the conflict described above. Even if the DTX leader did not reject it, some non-leader might still reject it because of the very old epoch. So under this framework the life of a distributed transaction cannot be too long. The limit can be adjusted via the server-side environment variable DAOS_VOS_AGG_GAP; the default value is 60 seconds.
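A minimal sketch of that rejection rule (names invented; it assumes the gap has already been converted from seconds into HLC epoch units, and reuses the daos_epoch_t typedef from the earlier sketch):

#include <stdbool.h>

/* Reject a modification whose epoch falls at or below the aggregation
 * upper bound, since that epoch range may already have been aggregated. */
static bool
vos_epoch_too_old(daos_epoch_t epoch, daos_epoch_t hlc_now,
		  daos_epoch_t agg_gap)
{
	return epoch <= hlc_now - agg_gap;
}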
NOTE: EC/VOS aggregation should avoid aggregating in an epoch range where lots of data records are pending commit, so the aggregation epoch upper bound is 'current HLC - vos_agg_gap'.
Signed-off-by: Fan Yong <[email protected]>
Before requesting gatekeeper:
- Features: (or Test-tag*) commit pragma was used, or there is a documented reason that there are no appropriate tags for this PR.
Gatekeeper: