set check_additivity=False for CausalForestDML #458
Conversation
LGTM
Why are we only adding this flag in that call? If this is a problem about correlation of variables and our use of background masks, and not about trees in particular, wouldn't this appear elsewhere?
Also, we could in principle make this flag a public-facing variable with the default being False.
This is going to be updated. After digging into different explainers and the shap doc examples: for TreeExplainer, we shouldn't pass a masker, and by default it will use the training sample stored in the estimator object; for the other explainers, a masker is required. However, there are some other issues around this (e.g. TreeExplainer with or without a masker outputs different shapes of base_values, and for TreeExplainer without a masker base_value no longer equals the mean of the CATE). I am looking into it now.
Take it back: we should still pass a masker to TreeExplainer, go with "interventional" feature perturbation, and just disable the additivity check. Since we have access to the training (background) dataset, "interventional" should be preferred and it gives a causal explanation; LinearExplainer uses the "interventional" approach as well.
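For context on why "interventional" perturbation makes the base value equal the mean prediction over the background data, here is a minimal self-contained sketch for a linear model. All numbers and helper names are hypothetical illustrations, not shap's actual internals; shap computes the analogous quantities inside TreeExplainer/LinearExplainer.

```python
# Sketch (hypothetical numbers): for a linear model f(x) = w.x + b, the
# "interventional" SHAP value of feature i at point x is w_i * (x_i - E[X_i]),
# and the base value is the mean prediction E[f(X)] over the background data.
# Additivity then holds exactly: base_value + sum(shap_values) == f(x),
# which is the property that check_additivity verifies numerically.

def f(x, w, b):
    """Linear model prediction."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def interventional_shap_linear(x, background, w):
    """Exact interventional Shapley values for a linear model."""
    n = len(background)
    means = [sum(row[i] for row in background) / n for i in range(len(x))]
    return [wi * (xi - mi) for wi, xi, mi in zip(w, x, means)]

w, b = [2.0, -1.0, 0.5], 3.0
background = [[0.0, 1.0, 2.0], [2.0, 3.0, 0.0], [4.0, 2.0, 4.0]]  # training sample
x = [1.0, 0.0, 1.0]

base_value = sum(f(row, w, b) for row in background) / len(background)
shap_values = interventional_shap_linear(x, background, w)

# The additivity check passes for the exact linear case:
assert abs(base_value + sum(shap_values) - f(x, w, b)) < 1e-9
```

For tree ensembles the interventional values come from path-dependent sums rather than a closed form, and floating-point accumulation can make the additivity check fail spuriously, which is why disabling it (as this PR does) can be necessary even when the values themselves are fine.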
No description provided.