
DAOS-16809 vos: container based stable epoch #15605

Open
wants to merge 1 commit into master from Nasf-Fan/DAOS-16809_1

Conversation

Nasf-Fan
Contributor

@Nasf-Fan Nasf-Fan commented Dec 12, 2024

To calculate the container based local stable epoch efficiently, we maintain a list of active DTX entries roughly ordered by epoch. Considering the related overhead, it is not practical to keep a strictly sorted list for all active DTX entries. For a DTX whose leader resides on the current target, its epoch is already generated in order on the current engine, so the main difficulty is the DTX entries whose leaders are on remote targets.

On the other hand, the local stable epoch is mainly used to generate the global stable epoch for incremental reintegration. In fact, we do not need a very accurate global stable epoch for incremental reintegration: it is harmless (non-fatal) if the calculated stable epoch is a bit smaller than the real one. For example, an error of a few seconds in the stable epoch is negligible compared with rebuilding the whole target from scratch. So for DTX entries whose leaders are on remote targets, we keep them in the list in roughly increasing epoch order instead of strictly sorting by epoch, and we introduce an O(1) algorithm to handle such a loosely sorted DTX entry list when calculating the local stable epoch.
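
For illustration only, a minimal sketch of that idea (struct rdtx_ent, rdtx_add() and rdtx_stable_bound() are hypothetical names, not the actual VOS code): remote-leader entries are appended in arrival order, so insertion is O(1), and the local stable bound is read from the list head, which is also O(1).

#include <stddef.h>
#include <daos_types.h>   /* daos_epoch_t */

/* Hypothetical sketch of the loosely sorted list of active DTX entries whose
 * leaders are on remote targets; epochs only trend upward along the list. */
struct rdtx_ent {
        daos_epoch_t     re_epoch;
        struct rdtx_ent *re_next;
};

struct rdtx_list {
        struct rdtx_ent *rl_head;  /* oldest active entry (roughly smallest epoch) */
        struct rdtx_ent *rl_tail;  /* newest entry */
};

/* O(1) insert: append at the tail, tolerating mild epoch disorder. */
static void
rdtx_add(struct rdtx_list *list, struct rdtx_ent *ent)
{
        ent->re_next = NULL;
        if (list->rl_tail != NULL)
                list->rl_tail->re_next = ent;
        else
                list->rl_head = ent;
        list->rl_tail = ent;
}

/* O(1) query: epochs below the head entry can be treated as locally stable;
 * with no active remote-leader DTX, fall back to a caller-provided bound. */
static daos_epoch_t
rdtx_stable_bound(struct rdtx_list *list, daos_epoch_t fallback)
{
        return list->rl_head != NULL ? list->rl_head->re_epoch - 1 : fallback;
}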

Main VOS APIs for the stable epoch:

/* Calculate current locally known stable epoch for the given container. */
daos_epoch_t vos_cont_get_local_stable_epoch(daos_handle_t coh);

/* Get global stable epoch for the given container. */
daos_epoch_t vos_cont_get_global_stable_epoch(daos_handle_t coh);

/* Set global stable epoch for the given container. */
int vos_cont_set_global_stable_epoch(daos_handle_t coh, daos_epoch_t epoch);
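
A hedged usage sketch of the three APIs: each engine would report its vos_cont_get_local_stable_epoch() value, and once the minimum across engines is known it can be folded into the global stable epoch. The helper name and header comment below are assumptions, not the actual DAOS call path.

#include <daos_types.h>    /* daos_handle_t, daos_epoch_t */
#include <daos_srv/vos.h>  /* assumed server-side header declaring the APIs above */

/* Hypothetical helper: fold the minimum local stable epoch reported by all
 * engines into the container's global stable epoch, which only moves forward. */
static int
cont_update_global_stable(daos_handle_t coh, daos_epoch_t min_local_epoch)
{
        daos_epoch_t global = vos_cont_get_global_stable_epoch(coh);

        if (min_local_epoch <= global)
                return 0;

        return vos_cont_set_global_stable_epoch(coh, min_local_epoch);
}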

Another important enhancement in the patch is handling the potential conflict between EC/VOS aggregation and delayed modifications with very old epochs.

For a standalone transaction, when it is started on the DTX leader, its epoch is generated by the leader, and then the modification RPC is forwarded to the other related non-leader(s). If the forwarded RPC is delayed for some reason, such as network congestion or a busy non-leader, the epoch of the transaction may become very old (exceeding the related threshold), and VOS aggregation may have already aggregated the related epoch range. In that case, the non-leader will reject the modification to avoid data loss/corruption.

For a distributed transaction, if there is no read (fetch, query, enumerate, and so on) before the client calls commit_tx, the related DTX leader generates the epoch for the transaction after the client's commit_tx. Epoch handling is then the same as for the standalone transaction above.

If the distributed transaction involves some read before the client's commit_tx, its epoch is generated by the first engine accessed for the read. If the transaction then takes too long, its epoch may already be very old when the client calls commit_tx, and the related DTX leader will have to reject the transaction to avoid the conflict mentioned above. Even if the DTX leader does not reject it, some non-leader may still reject it because of the very old epoch. So under this framework the lifetime of a distributed transaction cannot be too long. The limit can be adjusted via the server-side environment variable DAOS_VOS_AGG_GAP; the default value is 60 seconds.
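
How the engine parses the variable is not shown here; a minimal sketch, assuming plain getenv() (the engine presumably uses its own environment helpers) and the 60-second default mentioned above:

#include <stdlib.h>
#include <stdint.h>

#define VOS_AGG_GAP_DEF 60U  /* default gap in seconds, per the description above */

static uint32_t
vos_agg_gap_seconds(void)
{
        const char *val = getenv("DAOS_VOS_AGG_GAP");

        if (val == NULL || *val == '\0')
                return VOS_AGG_GAP_DEF;

        return (uint32_t)strtoul(val, NULL, 10);
}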

NOTE: EC/VOS aggregation should avoid aggregating in the epoch range where
      many data records are still pending commit, so the aggregation epoch
      upper bound is 'current HLC - vos_agg_gap'.
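
As an illustration, the bound in the NOTE could be computed as below; sec2hlc() is a placeholder for the engine's real seconds-to-HLC conversion (the HLC keeps physical time in its upper bits, so this is not a plain multiplication):

#include <stdint.h>

typedef uint64_t daos_epoch_t;  /* matches the definition in daos_types.h */

/* Placeholder for the engine's seconds-to-HLC conversion helper. */
extern daos_epoch_t sec2hlc(uint32_t sec);

/* EC/VOS aggregation must not go above 'current HLC - vos_agg_gap'. */
static daos_epoch_t
vos_agg_upper_bound(daos_epoch_t hlc_now, uint32_t vos_agg_gap_sec)
{
        return hlc_now - sec2hlc(vos_agg_gap_sec);
}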

Before requesting gatekeeper:

  • Two review approvals and any prior change requests have been resolved.
  • Testing is complete and all tests passed or there is a reason documented in the PR why it should be force landed and forced-landing tag is set.
  • Features: (or Test-tag*) commit pragma was used or there is a reason documented that there are no appropriate tags for this PR.
  • Commit messages follow the guidelines outlined here.
  • Any tests skipped by the ticket being addressed have been run and passed in the PR.

Gatekeeper:

  • You are the appropriate gatekeeper to be landing the patch.
  • The PR has 2 reviews by people familiar with the code, including appropriate owners.
  • Githooks were used. If not, request that user install them and check copyright dates.
  • Checkpatch issues are resolved. Pay particular attention to ones that will show up on future PRs.
  • All builds have passed. Check non-required builds for any new compiler warnings.
  • Sufficient testing is done. Check feature pragmas and test tags and that tests skipped for the ticket are run and now pass with the changes.
  • If applicable, the PR has addressed any potential version compatibility issues.
  • Check the target branch. If it is master branch, should the PR go to a feature branch? If it is a release branch, does it have merge approval in the JIRA ticket.
  • Extra checks if forced landing is requested
    • Review comments are sufficiently resolved, particularly by prior reviewers that requested changes.
    • No new NLT or valgrind warnings. Check the classic view.
    • Quick-build or Quick-functional is not used.
  • Fix the commit message upon landing. Check the standard here. Edit it to create a single commit. If necessary, ask submitter for a new summary.


Ticket title is 'DAOS local stable epoch'
Status is 'In Progress'
Labels: 'Rebuild'
https://daosio.atlassian.net/browse/DAOS-16809

@daosbuild1
Collaborator

Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/319/log

@daosbuild1
Collaborator

Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/350/log

@daosbuild1
Collaborator

Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/301/log

@daosbuild1
Collaborator

Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/398/log

@daosbuild1
Collaborator

Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/1/execution/node/345/log

@Nasf-Fan Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch from 42a21a8 to 96716b9 on December 12, 2024 16:20
@daosbuild1
Collaborator

Test stage Build on Leap 15.5 with Intel-C and TARGET_PREFIX completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/383/log

@daosbuild1
Collaborator

Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/348/log

@daosbuild1
Collaborator

Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/356/log

@daosbuild1
Collaborator

Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/374/log

@daosbuild1
Collaborator

Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15605/2/execution/node/349/log

@Nasf-Fan Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch 2 times, most recently from 1bdab81 to 128319b on December 12, 2024 16:48
@daosbuild1
Collaborator

Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/

@daosbuild1
Collaborator

Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/

@daosbuild1
Collaborator

Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/

@daosbuild1
Collaborator

Test stage Unit Test with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/4/testReport/

@Nasf-Fan Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch 2 times, most recently from 1db207f to eead8cc on December 13, 2024 07:42
@daosbuild1
Collaborator

Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/

@daosbuild1
Collaborator

Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/

@daosbuild1
Collaborator

Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/

@daosbuild1
Collaborator

Test stage Unit Test with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/6/testReport/

@Nasf-Fan Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch from eead8cc to f3368e7 on December 13, 2024 10:34
@daosbuild1
Collaborator

Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/

@daosbuild1
Collaborator

Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/

@daosbuild1
Collaborator

Test stage Unit Test bdev with memcheck on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/7/testReport/

@daosbuild1
Collaborator

Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/12/testReport/

@Nasf-Fan Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch 2 times, most recently from c89e9f0 to 003a9ff on December 16, 2024 07:38
@daosbuild1
Collaborator

Test stage Unit Test on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/14/testReport/

@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/14/testReport/

@daosbuild1
Collaborator

Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/14/testReport/

@Nasf-Fan Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch from 003a9ff to 1411e31 on December 16, 2024 11:16
@daosbuild1
Collaborator

Test stage NLT on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/15/testReport/

@daosbuild1
Collaborator

Test stage Unit Test bdev on EL 8.8 completed with status UNSTABLE. https://build.hpdd.intel.com/job/daos-stack/job/daos//view/change-requests/job/PR-15605/15/testReport/

@Nasf-Fan Nasf-Fan force-pushed the Nasf-Fan/DAOS-16809_1 branch from 1411e31 to 88e58e1 on December 16, 2024 15:54
@Nasf-Fan Nasf-Fan marked this pull request as ready for review December 17, 2024 01:48
@Nasf-Fan Nasf-Fan requested review from a team as code owners December 17, 2024 01:48
@jolivier23
Contributor

Just reading the description and wondering...do we not have any b+tree of dtx entries? Can we not just make the key order first by epoch?

@Nasf-Fan Nasf-Fan requested review from liuxuezhao and gnailzenh and removed request for a team December 17, 2024 07:16
@Nasf-Fan
Contributor Author

Nasf-Fan commented Dec 17, 2024

Just reading the description and wondering...do we not have any b+tree of dtx entries? Can we not just make the key order first by epoch?

Strictly sorting DTX entries by epoch would cause too much overhead since there are frequent DTX add/del operations.

The current non-strict sort mode in the patch has only O(1) overhead, which is enough for incremental reintegration.

@@ -436,6 +436,7 @@ class EngineYamlParameters(YamlParameters):
     "D_LOG_FILE_APPEND_PID=1",
     "DAOS_POOL_RF=4",
     "CRT_EVENT_DELAY=1",
+    "DAOS_VOS_AGG_GAP=25",
Contributor

  1. Why change this from the default of 60?
  2. I would think we need to run this PR with affected tests? Maybe Features: aggregation?

Contributor Author

  1. Why change this from the default of 60?
  2. I would think we need to run this PR with affected tests? Maybe Features: aggregation?

The environment variable DAOS_VOS_AGG_GAP is newly introduced by this patch. Before that, the gap between aggregation and the current HLC was DAOS_AGG_THRESHOLD, which was about 20 seconds. That gap cannot meet the new requirement for the stable epoch in this patch because it was fixed and may be too short, causing some transactions to be restarted frequently.
Generally, a small NVMe device fills up more easily and therefore needs a smaller aggregation gap; that is usually the case in test environments such as CI. In a real production environment we can use a relatively large value, which is why the default is 60 seconds.

VOS aggregation runs automatically in the background during CI tests unless it is explicitly disabled.

Contributor

My main concern is that the default PR tests might not be sufficient to cover this change. Even though aggregation is enabled by default, most tests do not explicitly verify aggregation behavior. I don't know this code well enough to say whether or not the default PR testing is enough.
I don't want to block this PR unnecessarily, so I will remove my -1, but please keep in mind that in general it is much more expensive to fix regressions after the initial landing than in the original PR, where all we have to do is use Features: <tags>.

Contributor

@mchaarawi mchaarawi Dec 18, 2024

I have some concerns about setting this flag for all tests. CI testing should resemble real-world testing on functional HW. The per-SSD capacity in CI functional tests is not different from production systems, where the capacity also varies greatly depending on what customers want.
It's fine to change it for some particular test, but (correct me if I'm wrong) this change here is for all CI tests?
If that is the case, we should not land that, TBH, unless there is some agreement that the default should change.

Contributor

Yeah, this is for all functional CI tests

Contributor Author

"DAOS_VOS_AGG_GAP=25" this setting makes CI tests almost keep the same behavior as without the patch (was 20+ seconds before). If without such setting, then for some short-time test, VOS aggregation may not be triggered before the test completed, that will decrease the potential race windows with VOS aggregation. Then may hide potential bugs. We can remove such setting for CI tests, but I am afraid that:

  1. Space pressure for small pool configuration.
  2. Less race with VOS aggregation.

@daltonbohning daltonbohning dismissed their stale review December 18, 2024 15:34

defer on Features usage
