I came across this issue while running a datafeed job several times in sequence, each time with the same job name. The stats were getting bigger and bigger (clearly visible in timing_stats.bucket_count), as if they were accumulating results from multiple job runs.
After analyzing the code, I found that the TimingStats document is not deleted when the job is deleted.
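For reference, here is a minimal sketch of how the growing count can be observed between runs by fetching the TimingStats document directly. The `.ml-anomalies-shared` index name and the `<job_id>_timing_stats` document-id convention are assumptions based on my reading of the ML code, and the helper class is illustrative, not part of Elasticsearch:

```java
import java.io.IOException;

import org.elasticsearch.action.get.GetRequest;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

class TimingStatsCheck {
    // Fetches the job's TimingStats document from the shared results index;
    // its source contains the bucket_count that kept growing across runs.
    // Index name and document-id convention are assumptions.
    static void printTimingStats(RestHighLevelClient client, String jobId) throws IOException {
        GetRequest get = new GetRequest(".ml-anomalies-shared", jobId + "_timing_stats");
        GetResponse response = client.get(get, RequestOptions.DEFAULT);
        System.out.println(response.getSourceAsString());
    }
}
```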
Actually, my conclusions were premature.
In fact, there is code (in TransportDeleteJobAction.java) that deletes from the shared results index all documents whose job_id matches the job being deleted:
```java
// From TransportDeleteJobAction.java: delete-by-query over the results indices,
// matching every document whose job_id equals the job being deleted.
DeleteByQueryRequest request = new DeleteByQueryRequest(indexNames.get());
ConstantScoreQueryBuilder query =
    new ConstantScoreQueryBuilder(new TermQueryBuilder(Job.ID.getPreferredName(), jobId));
request.setQuery(query);
// Don't fail if a results index is missing or closed.
request.setIndicesOptions(MlIndicesUtils.addIgnoreUnavailable(IndicesOptions.lenientExpandOpen()));
request.setSlices(AbstractBulkByScrollRequest.AUTO_SLICES);
request.setAbortOnVersionConflict(false);
request.setRefresh(true);
```
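Building on that, here is a minimal sketch (my own, not from the Elasticsearch source) of how deletion can be verified by counting the documents that still carry the deleted job's job_id. The `.ml-anomalies-shared` index name and the `countRemainingJobDocs` helper are illustrative assumptions:

```java
import java.io.IOException;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.ConstantScoreQueryBuilder;
import org.elasticsearch.index.query.TermQueryBuilder;
import org.elasticsearch.search.builder.SearchSourceBuilder;

class JobDeletionCheck {
    // Counts the documents in the shared results index that still carry the
    // deleted job's job_id; after a successful job deletion this should be zero.
    static long countRemainingJobDocs(RestHighLevelClient client, String jobId) throws IOException {
        SearchRequest searchRequest = new SearchRequest(".ml-anomalies-shared");
        searchRequest.source(new SearchSourceBuilder()
            // same term query shape as the delete-by-query above
            .query(new ConstantScoreQueryBuilder(new TermQueryBuilder("job_id", jobId)))
            .size(0)                // we only need the count, not the hits
            .trackTotalHits(true)); // report the exact total, not a lower bound
        SearchResponse response = client.search(searchRequest, RequestOptions.DEFAULT);
        return response.getHits().getTotalHits().value;
    }
}
```

Since the delete-by-query sets refresh=true, the count should drop to zero as soon as the job deletion completes.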
So the increased 'bucket_count' in 'timing_stats' is not related to the TimingStats document not being deleted.
I've filed a separate issue (#45839) that focuses on these large 'bucket_count' values.
I've also implemented a test (#45840) that proves the TimingStats document is deleted together with the job.