In online advertising, click-through rate (CTR) is a very important metric for evaluating ad performance. As a result, click prediction systems are essential and widely used for sponsored search and real-time bidding.
For this competition, we have provided 11 days' worth of Avazu data to build and test prediction models. Can you find a strategy that beats standard classification algorithms? The winning models from this competition will be released under an open-source license.
1. folder: Summary / MAIN_IMPROVE
2. results: 0.3882 (log loss)
3. ranking: 55/1786
- hour: 14-10-21-00 ~ 14-10-30-23 (YYMMDDHH) => derive hour-of-day and date => night/day, holiday, weekday features
- benchmark: random forest on unpreprocessed data
- feature engineering
- one-hot encoding (numeric features with fewer than 10k distinct values)
- hashing trick (categorical features)
- feature interaction
- group rare categories (seen fewer than 10 times) into a single bucket
- log-transform long-tailed features
- python fast learning (gradient descent)
- xgboost (gradient boosting)
- vw online learning (log-loss)
- L1, L2 tuning
- metadata
- ensemble models
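The time-feature derivation in the first bullet can be sketched as follows; the night window (22:00-06:00) and the Halloween flag are assumptions, and a real holiday calendar would replace the latter:

```python
from datetime import datetime

def time_features(hour_str):
    """Derive date/time features from Avazu's YYMMDDHH `hour` field."""
    ts = datetime.strptime(hour_str, "%y%m%d%H")
    return {
        "hour": ts.hour,
        "date": ts.strftime("%Y-%m-%d"),
        "weekday": ts.weekday(),                        # 0 = Monday
        "is_night": int(ts.hour < 6 or ts.hour >= 22),  # assumed night window
        "is_weekend": int(ts.weekday() >= 5),
        "is_halloween": int(ts.month == 10 and ts.day == 31),
    }

feats = time_features("14102100")  # first hour of the training window
```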
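A sketch of the two encodings above: one-hot indices for low-cardinality columns and the hashing trick for high-cardinality categoricals (md5 and the 2^20 bucket count are assumptions; any stable hash works):

```python
import hashlib

N_BUCKETS = 2 ** 20  # assumed hash-space size

def hash_index(column, value, n_buckets=N_BUCKETS):
    """Hashing trick: stable bucket index for a 'column=value' pair."""
    digest = hashlib.md5(f"{column}={value}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

def one_hot_index(vocab, value):
    """One-hot: give each distinct value its own index (columns with <10k values)."""
    if value not in vocab:
        vocab[value] = len(vocab)
    return vocab[value]

vocab = {}
i = one_hot_index(vocab, "banner_pos=1")
j = hash_index("site_id", "85f751fd")
```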
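The rare-category grouping and long-tail log-transform might look like this (the shared "RARE" token name is an assumption):

```python
import math
from collections import Counter

def group_rare(values, min_count=10):
    """Replace categories seen fewer than min_count times with one shared token."""
    counts = Counter(values)
    return [v if counts[v] >= min_count else "RARE" for v in values]

def log_transform(x):
    """Compress a long-tailed count feature with log1p."""
    return math.log1p(x)
```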
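A minimal version of the "python fast learning" step: online logistic regression by SGD over hashed feature indices, with an L2 term. The learning rate and regularization strength are assumptions, and the real script likely used per-coordinate adaptive rates:

```python
import math

class LogisticSGD:
    """Online logistic regression over hashed feature indices."""
    def __init__(self, n_buckets=2 ** 20, alpha=0.1, l2=1e-6):
        self.w = [0.0] * n_buckets
        self.alpha, self.l2 = alpha, l2

    def predict(self, idxs):
        z = sum(self.w[i] for i in idxs)
        z = max(min(z, 35.0), -35.0)  # clip the logit to avoid overflow
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, idxs, y):
        p = self.predict(idxs)
        g = p - y  # gradient of log loss w.r.t. the logit
        for i in idxs:
            self.w[i] -= self.alpha * (g + self.l2 * self.w[i])
        return p
```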
31.10 (Halloween) is qualitatively different from the rest of the sample
maybe training specifically on 24 (Friday) and/or 25-26 (weekend ~ holiday) would help?
- hold out 2 days of data for CV; just 1 gives poorer performance (around 0.006, as mentioned above)
- split the data into one file per day
- refactor the training and validation logic so a range of days can be passed to train/validate on
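A sketch of the per-day split and day-range selection described above (the `day_YYMMDD.csv` naming is an assumption, and `day_range` uses a plain numeric range, which is fine within a single month):

```python
import csv

def split_by_day(path):
    """Write one CSV per day, keyed on the first 6 chars (YYMMDD) of `hour`."""
    writers, files = {}, {}
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            day = row["hour"][:6]
            if day not in writers:
                files[day] = open(f"day_{day}.csv", "w", newline="")
                writers[day] = csv.DictWriter(files[day], fieldnames=reader.fieldnames)
                writers[day].writeheader()
            writers[day].writerow(row)
    for fh in files.values():
        fh.close()

def day_range(start, end):
    """Days as YYMMDD strings, e.g. day_range(141021, 141023)."""
    return [str(d) for d in range(start, end + 1)]
```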
http://cran.at.r-project.org/web/packages/FeatureHashing/index.html
- add columns for Hour, Weekday, Public Holiday
- hash / one-hot encode
- Split train based on different days
- python fast learning
- vw
- xgboost
- caret adaptive (svm)
- solution below
- ensembles
- calibration (-0.000x)
- GBDT - feature generation
- Hashing tricks - feature generation for FM (Factorization Machine)
- Factorization Machine
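The GBDT feature-generation step can be sketched with xgboost's `pred_leaf` option: each sample becomes its per-tree leaf index, hashed into a shared index space that an FM or linear model then consumes. The tree count, depth, and bucket count here are assumptions:

```python
import numpy as np
import xgboost as xgb

def gbdt_leaf_features(X_train, y_train, X_all, n_trees=30, depth=7):
    """Train a small GBDT, then encode each sample as its per-tree leaf index."""
    dtrain = xgb.DMatrix(X_train, label=y_train)
    params = {"objective": "binary:logistic", "max_depth": depth, "eta": 0.3}
    bst = xgb.train(params, dtrain, num_boost_round=n_trees)
    # Shape (n_samples, n_trees): the leaf each sample falls into, per tree.
    leaves = bst.predict(xgb.DMatrix(X_all), pred_leaf=True).astype(int)
    # Hash (tree, leaf) pairs into a shared index space for the downstream model.
    n_buckets = 2 ** 20
    return [[hash((t, leaf)) % n_buckets for t, leaf in enumerate(row)]
            for row in leaves]
```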
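The calibration entry above is unspecified; one common form is an intercept shift in log-odds space so that the mean prediction matches the known base CTR. A sketch under that assumption (the exact method used is not stated in these notes):

```python
import math

def calibrate(preds, target_rate):
    """Shift predictions in log-odds space toward a target mean rate.

    Approximate: it shifts by the logit of the means rather than solving
    for the exact intercept, which is usually close enough as a first pass.
    """
    def logit(p):
        return math.log(p / (1.0 - p))

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    shift = logit(target_rate) - logit(sum(preds) / len(preds))
    return [sigmoid(logit(p) + shift) for p in preds]
```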