
Do not fail Evaluate API when the actual and predicted fields' types differ #54255

Merged
Merged 2 commits on Mar 27, 2020

Conversation

przemekwitek
Contributor

@przemekwitek przemekwitek commented Mar 26, 2020

Currently, the Evaluate API fails when the user's actual and predicted fields have different types (e.g. long vs. boolean), because for some metrics (accuracy, precision) the terms aggregation tries to parse one type as the other.
This PR replaces the terms aggregation with a match query using lenient: true. This way the Evaluate API does not fail but returns no matches (just as if the sets of actual and predicted classes were disjoint).
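The approach above can be sketched as Elasticsearch query DSL. The field names and class values below are hypothetical, and the exact aggregation structure used in the PR may differ; this just illustrates a filter aggregation counting one (actual, predicted) class pair with lenient matching:

```json
{
  "size": 0,
  "aggs": {
    "actual_cat_predicted_dog": {
      "filter": {
        "bool": {
          "must": [
            { "match": { "actual_field":    { "query": "cat", "lenient": true } } },
            { "match": { "predicted_field": { "query": "dog", "lenient": true } } }
          ]
        }
      }
    }
  }
}
```

With `lenient: true`, a format-based error (for example, matching the text "cat" against a field mapped as long) produces zero hits instead of a shard failure, so the aggregation simply reports a count of 0 for that class pair rather than failing the whole request.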

Additionally, I took the opportunity to add a number of new test cases to ClassificationEvaluationIT to increase coverage of the various actual vs. predicted type combinations.

Closes #54079

@przemekwitek przemekwitek removed the WIP label Mar 26, 2020
@przemekwitek przemekwitek marked this pull request as ready for review March 26, 2020 09:37
@elasticmachine
Collaborator

Pinging @elastic/ml-core (:ml)

@przemekwitek
Contributor Author

run elasticsearch-ci/packaging-sample-matrix-unix


Successfully merging this pull request may close these issues.

Evaluate API fails when there is type mismatch between actual and predicted fields.
4 participants