
Generalize prediction cache. #8783

Merged (5 commits, Feb 13, 2023)
Conversation

trivialfis (Member) commented on Feb 10, 2023

Generalize prediction cache for other uses.

  • Extract most of the functionality into DMatrixCache.
  • Limit the size of the prediction cache to 32 items, which should be plenty. When the limit is exceeded, it is usually because the user forgot to free the input DMatrix during inference; by capping the cache size, we actually improve performance in those cases.
  • Move API entry struct to an independent file to reduce dependency on the predictor.h file.
  • Add a test.

I'm working on learning-to-rank related changes. One hurdle I ran into is how to cache sorted indices inside Metric. Currently, the Cox metric adds a member function to MetaInfo, while the AUC metric keeps an ad-hoc cache. This PR proposes reusing the prediction cache in these places.

Files with review comments (all resolved):
  • include/xgboost/cache.h
  • include/xgboost/predictor.h
  • tests/cpp/test_cache.cc
trivialfis (Member, Author)

@hcho3 Thank you for the review, all comments are addressed.

@trivialfis trivialfis requested a review from hcho3 February 12, 2023 17:46