[ML] Nightly maintenance is not triggered #47003
Comments
Pinging @elastic/ml-core
My observations when we were discussing this this morning were:
To debug this issue, I did a full cluster restart, which should schedule a new nightly maintenance task. But one day later, the maintenance task had still not run (7am server time).
The problem is much simpler than the subtle race conditions I had in mind. It was introduced in 6.6: https://github.com/elastic/elasticsearch/pull/36702/files#diff-6d8a49bb3f57c032cf9c63498d1e2f89L48 It means nightly maintenance has not run for any cluster on versions 6.6.0 to 7.4.0 inclusive. I will fix this for 6.8.4/7.4.1 (and also defend against the subtle races).
A refactoring in 6.6 meant that the ML daily maintenance actions have not been run at all since then. This change installs the local master listener that schedules the ML daily maintenance, and also defends against some subtle race conditions that could occur in the future if a node flipped very quickly between master and non-master. Fixes elastic#47003
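For context, below is a minimal sketch of the local-master-listener pattern the fix relies on. It is not the actual code from the fix PR; the class name and the `DailyMaintenanceScheduler` interface are illustrative stand-ins for the real ML daily maintenance service, and depending on the Elasticsearch version `LocalNodeMasterListener` may also require an `executorName()` implementation.

```java
import org.elasticsearch.cluster.LocalNodeMasterListener;
import org.elasticsearch.cluster.service.ClusterService;

// Minimal sketch of the master-listener pattern described above -- NOT the
// actual ML maintenance code. The essence of the bug was that a listener like
// this was no longer registered with the ClusterService after the 6.6
// refactoring, so onMaster() never fired and nightly maintenance was never
// scheduled.
public class MaintenanceSchedulerListener implements LocalNodeMasterListener {

    /** Placeholder for the real daily maintenance scheduler. */
    public interface DailyMaintenanceScheduler {
        void start();  // schedule the next nightly run
        void stop();   // cancel any pending run
    }

    private final DailyMaintenanceScheduler scheduler;

    public MaintenanceSchedulerListener(ClusterService clusterService,
                                        DailyMaintenanceScheduler scheduler) {
        this.scheduler = scheduler;
        // The registration step: without this call the listener is inert.
        clusterService.addLocalNodeMasterListener(this);
    }

    @Override
    public void onMaster() {
        // Only the elected master schedules the nightly maintenance task.
        scheduler.start();
    }

    @Override
    public void offMaster() {
        // Stop scheduling when this node is no longer master, so that at most
        // one node in the cluster runs the maintenance.
        scheduler.stop();
    }
}
```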
I opened #47103 to fix this. The reason it didn't show up in any of our unit/integration tests is that the unit tests don't exercise the real on/off master listener functionality, and in integration tests we actually test the […]. So in the tests in the elasticsearch repo we're testing most pieces of the jigsaw in isolation, but not the full jigsaw. This means it's critical that the long-running QA tests keep an eye out for whether daily maintenance is running in the early hours of each morning. If it is running, then the unit/integration tests show it's probably doing the right thing. But if it's not running in the early hours of each morning, then we're relying on the long-running QA tests to tell us.
Due to elastic#47003 many clusters will have built up a large backlog of expired results. On upgrading to a version where that bug is fixed, users could find that the first ML daily maintenance task deletes a very large number of documents. This change introduces throttling to the delete-by-query that the ML daily maintenance uses to delete expired results:
- Average 200 documents per second
- Maximum of 10 million documents per day

(There is no throttling for state/forecast documents as these are expected to be lower volume.) Relates elastic#47103
Due to #47003 many clusters will have built up a large backlog of expired results. On upgrading to a version where that bug is fixed, users could find that the first ML daily maintenance task deletes a very large number of documents. This change introduces throttling to the delete-by-query that the ML daily maintenance uses to delete expired results, limiting it to deleting an average of 200 documents per second. (There is no throttling for state/forecast documents as these are expected to be lower volume.) Additionally, a rough time limit of 8 hours is applied to the whole delete-expired-data action. (This is only rough because it won't stop partway through a single operation; it only checks the timeout between operations.) Relates #47103
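To illustrate the throttling mechanism described above, here is a minimal sketch of a delete-by-query request limited to an average of 200 documents per second via `requests_per_second`. The class name, index pattern, and timestamp field are assumptions for illustration, not the actual code used by the ML expired-data removers.

```java
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.reindex.DeleteByQueryRequest;

// Minimal sketch, not the actual ML removers' code: a delete-by-query over
// expired results, throttled so that on average at most 200 documents are
// deleted per second. Index pattern and field name are assumptions.
public class ThrottledExpiredResultsDelete {

    static DeleteByQueryRequest buildRequest(long cutoffEpochMs) {
        DeleteByQueryRequest request = new DeleteByQueryRequest(".ml-anomalies-*");
        // Only target results older than the retention cutoff.
        request.setQuery(QueryBuilders.rangeQuery("timestamp").lt(cutoffEpochMs));
        // requests_per_second throttling: an average of 200 deletions per second.
        request.setRequestsPerSecond(200.0f);
        return request;
    }
}
```

For scale, 200 documents per second sustained for 24 hours is roughly 17 million documents, which is why the additional caps described above (the 10-million-documents-per-day ceiling in one variant of the change, the rough 8-hour limit in the other) matter for bounding how much a single nightly run can delete.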
In our test cluster, we've noticed a large number of model snapshots for our long-running jobs, dating back to the day of the upgrade from 6.5.1 -> 6.6.2 -> 6.7.0.
An initial investigation of the cloud logs (only the past 7 days are available) showed that no maintenance task had been executed.