In the XGBoost Python library (version 1.4.2, also in 1.4.0), a custom eval metric receives an untransformed model score instead of the validation prediction probability.
Code to reproduce:
import numpy as np
import xgboost as xgb
from sklearn.metrics import accuracy_score


def custom_eval_metric(y_pred_proba, dtrain):
    y_true_label = dtrain.get_label()
    print(y_pred_proba.min(), y_pred_proba.max())
    acc_score = accuracy_score(
        y_true_label.reshape(-1, 1),
        (y_pred_proba > 0.5).astype(int).reshape(-1, 1))
    return 'custom_acc', -1 * acc_score


if __name__ == '__main__':
    # Generating random training data
    X_train = np.random.random(500)
    X_train = X_train.reshape((100, 5))
    y_train = (np.random.randint(0, 100, 100) > 50).astype(int)
    X_val = np.random.random(500)
    X_val = X_val.reshape((100, 5))
    y_val = (np.random.randint(0, 100, 100) > 50).astype(int)

    # training XGBoost classifier
    xgb_params = {
        "n_estimators": 20,
        "max_depth": 5,
        "learning_rate": 0.03,
        "verbosity": 0,
        "objective": "binary:logistic",
        "booster": "gbtree",
        "colsample_bytree": 0.98,
        "colsample_bylevel": 0.97,
        "subsample": 0.98,
        "disable_default_eval_metric": 1,
        "random_state": 2048}

    # Fit an XGBoost model; you can see that at each iteration the eval
    # metric gets the untransformed model output score
    model = xgb.XGBClassifier(**xgb_params)
    model.fit(
        X_train, y_train,
        eval_set=[(X_val, y_val)],
        early_stopping_rounds=15,
        verbose=True,
        eval_metric=custom_eval_metric)
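To confirm that the values passed to the metric are raw margins rather than probabilities, you can compare the booster's margin output with predict_proba after fitting. A minimal sketch, assuming the script above has run so that model and X_val are in scope; the sigmoid expression is my own addition:

# Compare the booster's raw margin output with the transformed probabilities.
# Assumes `model` and `X_val` from the script above are in scope.
dval = xgb.DMatrix(X_val)
raw_margin = model.get_booster().predict(dval, output_margin=True)
proba = model.predict_proba(X_val)[:, 1]
# For binary:logistic, applying the sigmoid to the raw margin should
# reproduce the probabilities returned by predict_proba.
print(np.allclose(1.0 / (1.0 + np.exp(-raw_margin)), proba))  # True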
Output:

A negative score is printed at each iteration.

Expected behaviour:

The printed prediction values should range between 0 and 1.

Bug analysis:

See line 1643 in core.py, XGBoost version 1.4.2. This problem is caused by an unsuitable predict method parameter (output_margin=True).

This is actually expected behavior. Quoting from the release note for 1.2.0:

Breaking: Custom evaluation metric now receives raw prediction (#5954)
Previously, the custom evaluation metric received a transformed prediction result when used with a classifier. Now the custom metric will receive a raw (untransformed) prediction and will need to transform the prediction itself. See demo/guide-python/custom_softmax.py for an example.
This change is to make the custom metric behave consistently with the custom objective, which already receives raw prediction (#5564).

The question remains: how can we transform the raw prediction score that custom_eval_metric receives into a prediction probability for a binary classification problem?
I have read the API code and searched for an answer for half a day, but found nothing.
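For the binary:logistic objective, the raw score handed to the custom metric is a log-odds margin, so applying the sigmoid function 1 / (1 + exp(-x)) recovers the probability. Below is a minimal sketch of the metric rewritten this way; the sigmoid helper is my own and not part of the XGBoost API:

import numpy as np
from sklearn.metrics import accuracy_score


def sigmoid(raw_score):
    # binary:logistic raw scores are log-odds; the sigmoid maps them to (0, 1)
    return 1.0 / (1.0 + np.exp(-raw_score))


def custom_eval_metric(y_pred_raw, dtrain):
    y_true_label = dtrain.get_label()
    # transform the raw margin into a probability before thresholding
    y_pred_proba = sigmoid(y_pred_raw)
    acc_score = accuracy_score(
        y_true_label,
        (y_pred_proba > 0.5).astype(int))
    return 'custom_acc', -1 * acc_score

For multi-class objectives the analogous transform is a softmax over the raw class scores, which is the situation covered by the demo/guide-python/custom_softmax.py example referenced in the release note above.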