
Fix non-deterministic behaviour in ModelLoadingServiceTests #55008

Merged
merged 2 commits on Apr 15, 2020

Conversation

davidkyle
Member

ModelLoadingServiceTests::testMaxCachedLimitReached tests that models are evicted from the cache when the total size of all the models is too large for the cache. There is some non-deterministic behaviour when the models are first loaded, as it is not known in what order they are loaded and therefore which of them will be evicted by a later load. This change adjusts the assertions to account for that uncertainty.
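A minimal sketch of the order-tolerant assertion style (illustrative only, not the actual test code; the class name, model IDs, and the `evicted` value are made up):

```java
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.in;
import static org.hamcrest.Matchers.is;

import java.util.Set;

public class OrderTolerantEvictionAssertion {
    public static void main(String[] args) {
        // Suppose model1 and model2 were loaded first, and loading model3
        // pushed the cache over its size limit. Which of the first two is
        // evicted depends on the (unspecified) order they were loaded in.
        Set<String> loadedFirst = Set.of("model1", "model2");
        String evicted = "model2"; // stand-in for the ID the cache reported evicting

        // Assert membership in the candidate set rather than identity,
        // so the test passes for either load order.
        assertThat(evicted, is(in(loadedFirst)));
    }
}
```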

I believe the non-deterministic behaviour comes from a loss of ordering going from a list of IDs to a set. Model loading is mocked so there shouldn't be a race there.
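A self-contained sketch of that ordering loss (the IDs are made up):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OrderingLossDemo {
    public static void main(String[] args) {
        List<String> modelIds = List.of("model-b", "model-a", "model-c");

        // A HashSet iterates in an order determined by hash codes, not by
        // insertion order, so any code that iterates the set sees an
        // effectively arbitrary ordering of the IDs.
        Set<String> asSet = new HashSet<>(modelIds);

        System.out.println("list order: " + modelIds); // [model-b, model-a, model-c]
        System.out.println("set order:  " + asSet);    // order not guaranteed
    }
}
```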

Closes #54986

@davidkyle added the >test (Issues or PRs that are addressing/adding tests), :ml (Machine learning), v8.0.0 and v7.7.0 labels on Apr 9, 2020
@elasticmachine
Collaborator

Pinging @elastic/ml-core (:ml)

@davidkyle requested a review from benwtrent on April 9, 2020 at 12:21
@@ -353,9 +353,9 @@ private void auditIfNecessary(String modelId, MessageSupplier msg) {
             logger.trace(() -> new ParameterizedMessage("[{}] {}", modelId, msg.get().getFormattedMessage()));
             return;
         }
-        auditor.warning(modelId, msg.get().getFormattedMessage());
+        auditor.info(modelId, msg.get().getFormattedMessage());
davidkyle
Member Author

A model being evicted from the cache is a routine event, not a warning-level event. I would prefer this to be debug, as it really is a non-issue (the model will be reloaded if required), but there is currently very little insight into the internals of the cache (which models are loaded, etc.), so info at least gives some visibility. Once we have a model management system this should be debug or trace.

I'm raising this here for discussion before I forget, but I'd be happy to revert the change for this PR.
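For concreteness, a sketch of the level choice being discussed; `Auditor` and `EvictionAudit` are hypothetical stand-ins, not the real ML classes:

```java
// Hypothetical sketch of the audit-level choice; Auditor is a stand-in
// functional interface, not the real ML auditor class.
interface Auditor {
    void info(String modelId, String message);
}

public class EvictionAudit {
    private final Auditor auditor;

    EvictionAudit(Auditor auditor) {
        this.auditor = auditor;
    }

    void onEviction(String modelId) {
        // Eviction is routine (the model is reloaded on demand), so audit at
        // info rather than warning; once a model management system exposes
        // the cache internals this could drop to debug or trace.
        auditor.info(modelId, "model evicted from inference cache");
    }

    public static void main(String[] args) {
        // Wire a trivial auditor that just prints, and simulate an eviction.
        EvictionAudit audit = new EvictionAudit((id, msg) -> System.out.println("[" + id + "] " + msg));
        audit.onEviction("model1");
    }
}
```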

@benwtrent (Member) left a comment

Ran the tests locally 1000s of times with this patch and they pass 100% ✅

:shipit:

@davidkyle force-pushed the fix-model-loading-test branch from 47e800c to 42baf3f on April 14, 2020 at 08:42
Labels
:ml (Machine learning), >test (Issues or PRs that are addressing/adding tests), v7.7.0, v8.0.0-alpha1
Development

Successfully merging this pull request may close these issues.

[CI] ModelLoadingServiceTests.testMaxCachedLimitReached failing
4 participants