ENH Emphasize discussion on multi-class classification in tree notebook #730
Conversation
I pushed some commits to adjust the colormaps to:
- avoid using a binary red/blue colormap for the multiclass decision boundary,
- use a continuous viridis colormap for the `predict_proba` plots, because 0.5 is no longer meaningful in a one-vs-rest setting (see the sketch below).
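For illustration, here is a minimal sketch of what such per-class `predict_proba` maps with viridis could look like. The data and estimator are stand-ins (synthetic blobs and a small `DecisionTreeClassifier` rather than the notebook's penguins data), and the plotting calls are one possible way to render the panels:

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the penguins data: 3 classes, 2 features.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Evaluate the per-class probabilities on a grid covering the feature space.
xx, yy = np.meshgrid(
    np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
    np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200),
)
proba = tree.predict_proba(np.c_[xx.ravel(), yy.ravel()])

fig, axes = plt.subplots(ncols=len(tree.classes_), figsize=(12, 4))
for class_idx, ax in enumerate(axes):
    # One panel per class: P(class) goes from 0 (dark purple) to 1 (bright
    # yellow); a continuous colormap implies no neutral midpoint at 0.5.
    mappable = ax.contourf(
        xx,
        yy,
        proba[:, class_idx].reshape(xx.shape),
        cmap="viridis",
        vmin=0,
        vmax=1,
    )
    ax.scatter(X[:, 0], X[:, 1], c=y, edgecolor="black", s=10)
    ax.set_title(f"class {tree.classes_[class_idx]}")
fig.colorbar(mappable, ax=axes, orientation="horizontal", label="predict_proba")
plt.show()
```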
Below are some more suggestions, but beyond this, LGTM.
python_scripts/trees_sol_01.py (outdated)
# For example, in the plot below, the first plot on the left shows in red the
# certainty on classifying a data point as belonging to the "Adelie" class. In
# the same plot, the blue color represents the certainty of **not** belonging to
# the "Adelie" class. The same logic applies to the other plots in the figure.
@ogrisel does this paragraph make sense to you when using a diverging colormap?
Or can you please elaborate on why the 0.5 probability cannot be interpreted under this one-vs-rest logic?
Indeed, this paragraph needs to be updated to match the new colors (e.g. bright yellow vs. dark purple). What I mean is that the chance level for a one-vs-rest binary classification problem derived from a multi-class classification problem is almost never at 0.5, so using a colormap with a neutral white at 0.5 might give a false impression.
When we do one-vs-rest, we do not threshold the value of `predict_proba` at 0.5 to get the hard class predictions; instead, we concatenate the 3 one-vs-rest `predict_proba` vectors into a 2D array and take the argmax across the classes dimension, as in the sketch below.
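A minimal numpy sketch of this argmax step; the class names and probability values below are made up for illustration:

```python
import numpy as np

# Hypothetical one-vs-rest probabilities for 4 samples: each vector is the
# "positive class" column of one binary (class vs. rest) problem.
proba_adelie = np.array([0.7, 0.2, 0.1, 0.45])
proba_chinstrap = np.array([0.2, 0.6, 0.3, 0.35])
proba_gentoo = np.array([0.1, 0.2, 0.6, 0.2])

# Concatenate the 3 vectors into a (n_samples, n_classes) 2D array...
proba = np.column_stack([proba_adelie, proba_chinstrap, proba_gentoo])

# ...and take the argmax across the classes dimension. Note that the last
# sample is assigned class 0 with only 0.45 probability: no 0.5 threshold
# is involved anywhere.
hard_predictions = proba.argmax(axis=1)
print(hard_predictions)  # [0 1 2 0]
```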
Does it make sense to keep the colorbar at the bottom of the plot in this case?
I believe so.
A final batch of feedback. Otherwise LGTM. The rendering looks good.
Merged as commit c9a7ad4: ENH Emphasize discussion on multi-class classification in tree notebook (#730). Co-authored-by: ArturoAmorQ <[email protected]>, Co-authored-by: Olivier Grisel <[email protected]>
This PR reworks the general wording to avoid redundant text and prefers verbs in the present tense.
It also adds a plot of `predict_proba` per class, inspired by the Plot classification probability example.