
Adding subshard work items on lease expiry #1185

Closed

Conversation

AndreKurait
Member

@AndreKurait AndreKurait commented Dec 9, 2024

Description

Continuation of #1160

Changes since #1160

  • Added LeaseExpirationTest that refreshes after every bulk put to synthetically generate more segments for the test case.
  • Added binary search to more efficiently find the starting document segment.
  • Modified initial segment sorting, with error logging on equality.
  • Added code to finish the current work item and create a remainder work item when a lease expires.
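To illustrate the binary-search change, here is a minimal sketch of locating the starting segment. It assumes segments are sorted and indexed by their base doc id; the method and parameter names are illustrative, not the actual RFS types.

```java
// Hypothetical sketch: given segments sorted by base doc id, find the
// segment containing startingDocId instead of scanning linearly.
public class SegmentLocator {
    // Returns the index of the last segment whose baseDoc <= startingDocId.
    static int findStartingSegment(int[] segmentBaseDocs, int startingDocId) {
        int lo = 0, hi = segmentBaseDocs.length - 1, ans = 0;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (segmentBaseDocs[mid] <= startingDocId) {
                ans = mid;       // candidate; look for a later qualifying segment
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return ans;
    }
}
```

Docs before the starting segment can then be skipped wholesale, and only the starting segment needs a per-doc skip up to startingDocId.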

Behavior changes are as follows:

  • Updates the LuceneDocumentsReader to sort the segments and emit docs in sequence, skipping docs until the passed-in startingDocId. Note: docs within a segment are still read in parallel, just emitted in sequence once aggregated together.
  • Updates the RfsLuceneDocument to contain the luceneDocId (segmentBaseDoc + docId)
  • Updates the DocumentReindexer to emit a flux of the latest sequential docId processed.
  • Updates the DocumentsRunner to plumb context from work item progress and cancellation to the LeaseEnd
  • Updates the OpenSearchWorkCoordinator to rename numAttempts to nextAcquisitionLeaseExponent (incrementing the script version from poc to 2.0)
  • Updates exitOnLeaseTimeout to handle cancelling document reindexing work and creating the successor work item based on the progress checkpoint and shard work timing. Logic below.
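The lease-expiry flow above can be sketched roughly as follows. The WorkCoordinator interface and the successor-item naming here are assumptions for illustration, not the actual OpenSearchWorkCoordinator API.

```java
// Illustrative sketch of handling lease expiry: after reindexing is
// cancelled, finish the current work item and enqueue a successor that
// resumes at the next unprocessed doc.
public class LeaseExpiryHandler {
    // Hypothetical coordinator surface; the real interface differs.
    interface WorkCoordinator {
        default void completeWorkItem(String workItemId) {}
        default void createUnassignedWorkItem(String workItemId, int leaseExponent) {}
    }

    static String onLeaseExpired(WorkCoordinator coordinator, String shardWorkItemId,
                                 int lastProcessedDocId, int nextLeaseExponent) {
        // Successor encodes the checkpoint so the next worker can skip ahead.
        String successorId = shardWorkItemId + "__" + (lastProcessedDocId + 1);
        coordinator.createUnassignedWorkItem(successorId, nextLeaseExponent);
        coordinator.completeWorkItem(shardWorkItemId);
        return successorId;
    }
}
```

Creating the successor before completing the current item avoids a window where progress could be lost if the worker dies between the two calls.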

The lease time increase logic has changed. Behavior is as follows:

  • If the worker did not have enough time to process any docs, the lease time is doubled for the next run.
  • Otherwise:
    • If the worker spent more than 10% of its time downloading/extracting the shard, double the lease time for the next run.
    • Else, if the worker spent less than 2.5% of its time downloading/extracting the shard, halve the lease time for the next run.
    • Else, keep the lease time the same for the successive run.
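A minimal sketch of the scaling rules above, assuming the lease duration is derived from an exponent (base × 2^exponent, matching the nextAcquisitionLeaseExponent rename). The 10% and 2.5% thresholds come from the description; the method and parameter names are illustrative.

```java
// Hypothetical lease-scaling helper: adjusts the lease exponent based on
// how much of the lease was spent on shard setup (download/extract).
public class LeaseScaling {
    static int nextLeaseExponent(int currentExponent, long docsProcessed,
                                 long shardSetupMillis, long leaseMillis) {
        if (docsProcessed == 0) {
            return currentExponent + 1;              // no progress: double the lease
        }
        double setupFraction = (double) shardSetupMillis / leaseMillis;
        if (setupFraction > 0.10) {
            return currentExponent + 1;              // setup-dominated: double
        } else if (setupFraction < 0.025) {
            return Math.max(0, currentExponent - 1); // ample headroom: halve
        }
        return currentExponent;                      // keep the same lease
    }
}
```

Keeping setup time between 2.5% and 10% of the lease steers workers toward leases that are long enough to amortize the download cost but short enough to limit rework after a failure.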

Added E2E test as follows:

  • Create docs with the workload generator, then configure toxiproxy and lease durations so that finishing the shard requires eight 20-second leases with checkpoints. Verify the exit codes and the docs migrated.

Issues Resolved

Testing

Tested in AWS and added a new E2E test around the scenario.

Check List

  • New functionality includes testing
    • All tests pass, including unit test, integration test and doctest
  • New functionality has been documented
  • Commits are signed per the DCO using --signoff

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.


codecov bot commented Dec 9, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 80.75%. Comparing base (fc62f57) to head (bfce5e1).

Additional details and impacted files
@@             Coverage Diff              @@
##               main    #1185      +/-   ##
============================================
- Coverage     80.93%   80.75%   -0.19%     
- Complexity     2995     3009      +14     
============================================
  Files           407      409       +2     
  Lines         15241    15439     +198     
  Branches       1021     1034      +13     
============================================
+ Hits          12336    12468     +132     
- Misses         2277     2341      +64     
- Partials        628      630       +2     
Flag Coverage Δ
unittests 80.75% <ø> (-0.19%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

Member

@peternied peternied left a comment


Thanks for keeping up with this Andre.

Member


I'm looking for a test that starts a migration, reports progress, then stops RFS (simulating something going wrong), and then starts a fresh instance, verifying that we restarted from the correct point in the process and ran to completion. Do we have a test that verifies this while reading a legit snapshot? (The work coordinator could be mocked or real, IMO.)

Member Author

@AndreKurait AndreKurait Dec 12, 2024


then we stop RFS (simulating something going wrong)

We currently only save progress when a lease expires. What aspect of the system would this be verifying that isn't in LeaseExpirationTest?

Member


Our most recent customer was running into an OOM of the container; in that case we would never have updated the progress. In another case, I expect that if customers dialed the RFS work scale to 0 to pause the migration, we wouldn't want to lose that progress either.

How hard would it be to add 60 second check-ins? 🤞 I hope pretty easy

Member Author


In another case, I expect that if customers dialed the RFS work scale to 0 to pause the migration, we wouldn't want to lose that progress either

That work is being tracked in https://opensearch.atlassian.net/issues/MIGRATIONS-2172

Signed-off-by: Andre Kurait <[email protected]>
@AndreKurait
Member Author

Continuing in #1198

@AndreKurait AndreKurait deleted the MIGRATIONS-2128 branch December 12, 2024 22:45