
[ML] Implement AucRoc metric for classification evaluation. #62160

Closed
przemekwitek opened this issue Sep 9, 2020 · 1 comment
Labels
>feature :ml Machine learning

Comments

@przemekwitek
Contributor

Currently, the ML Evaluate API exposes the AucRoc metric only for evaluating the results of outlier detection analysis.
Such a metric would also be useful for evaluating (multiclass) classification analysis.
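For multiclass classification, AUC-ROC is typically computed per class in a one-vs-rest fashion: pick one class as "positive", treat every other class as "negative", and integrate the ROC curve over the predicted probabilities for that class. The following is a minimal pure-Python sketch of that computation (illustrative only; it is not the actual Elasticsearch implementation):

```python
def auc_roc_one_vs_rest(labels, scores, positive_class):
    """One-vs-rest AUC-ROC: `scores` are predicted probabilities
    (or any monotonic scores) for `positive_class`."""
    # Binarize: the chosen class is "positive", all others "negative".
    y = [1 if label == positive_class else 0 for label in labels]
    pos = sum(y)
    neg = len(y) - pos
    if pos == 0 or neg == 0:
        raise ValueError("need both positive and negative examples")
    # Sweep a threshold from high to low score, accumulating
    # (FPR, TPR) points of the ROC curve; ties share one point.
    order = sorted(range(len(y)), key=lambda i: -scores[i])
    tp = fp = 0
    points = [(0.0, 0.0)]
    prev_score = None
    for i in order:
        if prev_score is not None and scores[i] != prev_score:
            points.append((fp / neg, tp / pos))
        prev_score = scores[i]
        if y[i]:
            tp += 1
        else:
            fp += 1
    points.append((1.0, 1.0))
    # Trapezoidal integration of TPR over FPR gives the area.
    auc = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        auc += (x1 - x0) * (y0 + y1) / 2
    return auc
```

A perfectly separating classifier scores 1.0; reversing its scores would score 0.0, and random scores hover around 0.5.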

@przemekwitek
Contributor Author

I consider this done.
There is still one outstanding issue: providing a user-friendly error message when an old destination index is encountered in an Evaluate request. But that is a small enhancement that should not block closing this issue.
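For reference, a classification evaluation request naming the new metric could look roughly like the following (the index, field, and class names are illustrative, and the exact body shape should be checked against the Elasticsearch Evaluate API docs for the release that shipped this):

```
POST _ml/data_frame/_evaluate
{
  "index": "animal_classification-dest",
  "evaluation": {
    "classification": {
      "actual_field": "animal_class",
      "metrics": {
        "auc_roc": { "class_name": "dog" }
      }
    }
  }
}
```

Naming a single `class_name` matches the one-vs-rest framing: the response reports the AUC-ROC for that class against all others.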
