[ML] anomaly detection Job failed without reason audit message when model snapshot is too old #80621
Pinging @elastic/ml-core (Team:ML)
Some extra information about this:
So we should add an audit message that records what happens when an open job goes into the failed state.
[ML] Audit job open failures and stop any corresponding datafeed (#80665):

The anomaly detection code contained an assumption dating back to 2016 that if a job failed then its datafeed would notice and stop itself. That works if the job fails on a node after it has successfully started up. But it doesn't work if the job fails during the startup sequence. If the job is being started for the first time then the datafeed won't be running, so there's no problem, but if the job fails when it's being reassigned to a new node then it breaks down, because the datafeed is started but not assigned to any node at that instant. This PR addresses this by making the job force-stop its own datafeed if it fails during its startup sequence and the datafeed is started. Fixes elastic#48934

Additionally, auditing of job failures during the startup sequence is moved so that it happens for all failure scenarios instead of just one. Fixes elastic#80621
This is actually one particular case of a broader problem. There are a number of reasons why a job can fail when it is assigned to a new node, for example:
None of these were creating an audit message, so they would have been invisible to an end user viewing the UI. #80665 should fix those 3 (which existed in 7.x) plus the one this issue was opened for (which is new in 8.0).
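For illustration, here is a minimal sketch of the behaviour #80665 describes: audit every failure in the job's startup sequence, then force-stop the job's datafeed if that datafeed is started. The interface and class names below are hypothetical placeholders, not the actual classes in the Elasticsearch code base.

```java
// Hypothetical types used only to illustrate the shape of the fix described in #80665.
interface Auditor {
    void error(String jobId, String message);   // writes an audit message visible in the UI
}

interface DatafeedStopper {
    boolean isStarted(String jobId);            // is a datafeed started for this job?
    void forceStop(String jobId);               // force-stop that datafeed
}

final class JobStartupFailureHandler {

    private final Auditor auditor;
    private final DatafeedStopper datafeeds;

    JobStartupFailureHandler(Auditor auditor, DatafeedStopper datafeeds) {
        this.auditor = auditor;
        this.datafeeds = datafeeds;
    }

    /** Called on every failure path in the job's startup sequence, not just one. */
    void onStartupFailure(String jobId, Exception cause) {
        // 1. Always audit the failure so the reason is visible to the user.
        auditor.error(jobId, "Job failed during startup: " + cause.getMessage());

        // 2. A datafeed that is started but not yet assigned to a node (as happens
        //    while the job is being reassigned) cannot notice the failure itself,
        //    so force-stop it from the job's own failure path.
        if (datafeeds.isStarted(jobId)) {
            datafeeds.forceStop(jobId);
        }
    }
}
```

The key point is that the job no longer relies on a running datafeed to notice the failure; the stop is driven from the job's own startup failure handling.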
Steps to reproduce (easier to do this from a cloud environment)
The AD job failed in the UI, and no failure reason was shown there or in the job messages (at 11:56 in the screenshot below).
After stopping the datafeed and force-closing the job (at 16:01 in the screenshot above), restarting the job surfaced the root cause of the failure: the model snapshot was too old.
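For reference, a sketch of the same stop / force-close / reopen sequence driven through the ML REST APIs instead of the UI, using the low-level Java REST client. The host, job ID, and datafeed ID are placeholders.

```java
import org.apache.http.HttpHost;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.RestClient;

public class RestartFailedJob {
    public static void main(String[] args) throws Exception {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            // Stop the datafeed (force, because the failed job cannot close it cleanly).
            client.performRequest(new Request("POST", "/_ml/datafeeds/my-datafeed/_stop?force=true"));

            // Force-close the failed job.
            client.performRequest(new Request("POST", "/_ml/anomaly_detectors/my-job/_close?force=true"));

            // Reopen the job; in this case the open call surfaced the real failure
            // reason (the model snapshot was too old).
            client.performRequest(new Request("POST", "/_ml/anomaly_detectors/my-job/_open"));
        }
    }
}
```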
Expected behaviour:
A clear failure message in the UI and in the job messages.