CTR demo #57

Merged Jun 1, 2017 (36 commits; the diff below shows changes from 15 of them)

Commits:
- 9312f5b init doc and model (Superjomn, May 24, 2017)
- 1ba7d6a Merge branch 'develop' of https://github.com/PaddlePaddle/models into… (Superjomn, May 24, 2017)
- b3e717b finish first version (Superjomn, May 25, 2017)
- 62d9503 finish code (Superjomn, May 25, 2017)
- 912a562 fix img display (Superjomn, May 25, 2017)
- cea8fd6 finish image display (Superjomn, May 25, 2017)
- a1b2906 change Paddle -> PaddlePaddle (Superjomn, May 25, 2017)
- b0e7d38 change img/ -> images/ (Superjomn, May 25, 2017)
- c126c52 add cross feature into model input (Superjomn, May 26, 2017)
- 4f6dd9d change ` -> ~ (Superjomn, May 26, 2017)
- 04fbeb5 update markdown files (Superjomn, May 26, 2017)
- 07ba10b fix markdown display (Superjomn, May 26, 2017)
- 4f70521 wrap slashed words with ~ (Superjomn, May 26, 2017)
- d265ff5 wrap ~ (Superjomn, May 26, 2017)
- d483d65 fix markdown style (Superjomn, May 26, 2017)
- 9400539 Merge branch 'develop' of https://github.com/PaddlePaddle/models into… (Superjomn, May 26, 2017)
- 25c0570 delete org files (Superjomn, May 26, 2017)
- d718d1e style code (Superjomn, May 26, 2017)
- bd9b609 code style with yapf (Superjomn, May 26, 2017)
- 8820e38 ((0,1)) -> (0,1) (Superjomn, May 26, 2017)
- 99fc1b2 change no to i (Superjomn, May 26, 2017)
- 5f09c4d add paddle.init (Superjomn, May 26, 2017)
- 537a5dd code style (Superjomn, May 26, 2017)
- 30de2ef delete process_markdown.py (Superjomn, May 26, 2017)
- 73bdd3b draft edit:w (Superjomn, May 26, 2017)
- 1297198 rename images (Superjomn, May 31, 2017)
- 04725d7 fix style errors (Superjomn, May 31, 2017)
- 079a3c8 set trainer_count=1 (Superjomn, May 31, 2017)
- 6ac4d31 corrected reference style (Superjomn, May 31, 2017)
- c16dd3d add more usage in dataset (Superjomn, May 31, 2017)
- f96b7d9 embeddint table -> Embedding (Superjomn, May 31, 2017)
- 6e63ef4 add argument parser (Superjomn, Jun 1, 2017)
- 05b93ec reformat references (Superjomn, Jun 1, 2017)
- ae3d361 reformat (Superjomn, Jun 1, 2017)
- 05bb5e4 add default training command (Superjomn, Jun 1, 2017)
- 2b36b54 pass pre-commit (Superjomn, Jun 1, 2017)
255 changes: 255 additions & 0 deletions ctr/README.md
@@ -0,0 +1,255 @@
<div id="table-of-contents">
<h2>Table of Contents</h2>
<div id="text-table-of-contents">
<ul>
<li><a href="#orgc299c2a">1. Background</a>
<ul>
<li><a href="#org5cc253b">1.1. LR vs DNN</a></li>
</ul>
</li>
<li><a href="#orgab346e7">2. Data and Task Abstraction</a></li>
<li><a href="#org07ef211">3. Wide &amp; Deep Learning Model</a>
<ul>
<li><a href="#orgeae9b2d">3.1. Model Overview</a></li>
<li><a href="#org19637b5">3.2. Defining the Model Inputs</a></li>
<li><a href="#orgd2cbfbd">3.3. Building the Wide Part</a></li>
<li><a href="#orgd78c9ff">3.4. Building the Deep Part</a></li>
<li><a href="#org92e3541">3.5. Combining the Two</a></li>
<li><a href="#orgb4020a9">3.6. Defining the Training Task</a></li>
</ul>
</li>
<li><a href="#org8f6a6fa">4. References</a></li>
</ul>
</div>
</div>


<a id="orgc299c2a"></a>

> **Reviewer (Collaborator):** Use a single first-level heading, e.g. `# 点击率预估` (Click-Through Rate Prediction); make the following sections second- and third-level headings.

# Background

CTR (Click-Through Rate) is the probability that a user clicks on a particular link, and it is commonly used to measure the effectiveness of an online advertising system.

> **Reviewer (Collaborator):** Click-through rate --> Click-Through Rate

When there are multiple ad slots, CTR estimates are typically used as the basis for ranking. For example, in a search engine's advertising system, when a user submits a commercially valuable query, the system roughly performs the following steps to display ads:

1. Retrieve the set of ads matching the query
2. Filter by business rules and relevance
3. Rank according to the auction mechanism and CTR (a simplified sketch of this step follows below)
4. Display the ads

As you can see, CTR plays an important role in the final ranking.
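The text does not spell out the scoring rule used in step 3. As a purely illustrative sketch, the snippet below ranks candidate ads by bid times predicted CTR (a common eCPM-style simplification); the fields and numbers are made up and are not part of this example's code.

```python
# Hypothetical sketch of step 3: rank candidate ads by bid * predicted CTR.
# The scoring rule and the ad records below are illustrative assumptions only.
candidate_ads = [
    {"ad_id": "a", "bid": 0.80, "pctr": 0.012},
    {"ad_id": "b", "bid": 0.30, "pctr": 0.045},
    {"ad_id": "c", "bid": 1.20, "pctr": 0.006},
]

ranked = sorted(candidate_ads, key=lambda ad: ad["bid"] * ad["pctr"], reverse=True)
print([ad["ad_id"] for ad in ranked])  # ['b', 'a', 'c']
```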

In industry, CTR models have gone through the following stages of development:

- Logistic Regression (LR) / GBDT + feature engineering
- LR + DNN features
- DNN + feature engineering

In the early days LR dominated, but recently DNN models, thanks to their strong learning capacity and increasingly mature performance optimizations, have gradually taken over the CTR prediction task.


<a id="org5cc253b"></a>

## LR vs DNN

The figure below shows the structures of an LR model and a \(3 \times 2\) NN model:
> **Reviewer (Collaborator):** Shouldn't "NN" be "DNN" here? The term NN has not appeared in the text above.


![img](./images/lr-vs-dnn.jpg)
> **Reviewer (Collaborator):**
> 1. The image is not centered.
> 2. The figure caption is missing.
> 3. Use "_" instead of "-" in image file names, for consistency with the other examples in the repo: "lr-vs-dnn.jpg" --> "lr_vs_dnn.jpg".
> 4. For consistency with the other examples, use the same figure markup as they do (a centered image followed by a caption such as "Figure 1. ...").


The LR part and the blue arrows map directly onto structures in the NN, so LR and the NN clearly share some common ground (such as the weighted accumulation of inputs), but at the same input dimension the former's model complexity can be much lower than the latter's (in a sense, the more complex a model is, the more potential it has to learn more complex information).
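To make the complexity gap concrete, here is a rough parameter count for LR versus a small fully connected network at the same input dimension. Interpreting the "3x2" network as two hidden layers of width 3 plus one output unit is an assumption made only for this illustration.

```python
# Rough parameter counts at the same input dimension d.
# Assumption: the small NN has two hidden layers of width 3 and one output unit.
def lr_params(d):
    return d + 1  # weights + bias

def small_nn_params(d, hidden=(3, 3)):
    total, prev = 0, d
    for width in hidden:
        total += prev * width + width  # one fully connected layer: weights + biases
        prev = width
    return total + prev + 1  # output unit: weights + bias

d = 1000
print(lr_params(d), small_nn_params(d))  # 1001 3019
```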

For LR to match the learning capacity of the NN, it must increase the input dimension, that is, the number of features, which is why LR has to be coupled with large-scale feature engineering.

> **Reviewer (Collaborator):** NN --> DNN. The text above introduced DNN but never mentioned NN, which will confuse readers.

The advantage of LR over NN models is its capacity for large-scale sparse features: in terms of both memory and computation, industry has very mature optimizations for it.

NN models, on the other hand, can learn new features on their own, which to some extent makes feature usage more efficient; as a result, given features of the same scale, an NN model is more likely to achieve better results.

The following sections demonstrate how to use PaddlePaddle to build a model that combines the strengths of both.


<a id="orgab346e7"></a>

# Data and Task Abstraction

> **Reviewer (Collaborator):** Each article should have only one first-level heading; change this one to a second-level heading.

We can use `click` as the learning target. Specifically, the task can be formulated in several ways:

> **Reviewer (Collaborator):** Reword the Chinese original: "具体任务可以有以下几种方案:" --> "具体的,任务可以有以下几种方案:"


1. Directly learn `click`, treating the 0/1 labels as binary classification
2. Learning to rank, specifically pairwise rank (label 1 > 0) or list rank
3. Compute each ad's click-through rate, pair up ads under the same query (higher CTR > lower CTR), and perform ranking or classification on the pairs

> **Reviewer (Collaborator):** list --> listwise

We directly use the first approach, treating it as a classification task.

We use the dataset of the Kaggle `Click-through rate prediction` competition[3] to demonstrate the model.

> **Reviewer (Collaborator):** Please use the following citation marker style: [3]

For the details of the feature processing, see [data process](./dataset.md).
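The real feature processing lives in the dataset code referenced above. As a rough, hypothetical sketch of the general idea, categorical fields can be mapped to the active indices of a sparse binary vector; the field names, dimensions, and use of hashing below are illustrative assumptions, not this example's actual code.

```python
# Hypothetical sketch: encode the categorical fields of one record as the active
# indices of a sparse binary vector. Field names, dimensions, and the hashing are
# illustrative only; see dataset.md for the processing actually used here.
def one_hot_index(value, offset, field_dim):
    return offset + hash(value) % field_dim

record = {'site_id': '1fbe01fe', 'device_type': '1'}
field_dims = [('site_id', 10000), ('device_type', 8)]

indices, offset = [], 0
for name, dim in field_dims:
    indices.append(one_hot_index(record[name], offset, dim))
    offset += dim

# `indices` is what a paddle sparse_binary_vector input expects for one sample.
print(indices)
```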


<a id="org07ef211"></a>

# Wide & Deep Learning Model

Google proposed the Wide & Deep Learning model framework in 2016 to combine the strengths of two kinds of models: DNNs, which are well suited to learning abstract features, and LR, which handles large-scale sparse features well.


<a id="orgeae9b2d"></a>

## Model Overview

The Wide & Deep Learning Model can be used as a relatively mature model framework, and it has seen some industrial adoption for CTR prediction, so this article demonstrates using it to complete the CTR prediction task.

The model structure is as follows:

![img](./images/wide-deep.png)
> **Reviewer (Collaborator):** The image markup needs fixing: 1. it is not centered, 2. the figure caption is missing, 3. use "_" instead of "-" in the file name.


The Wide part on the left of the model can accommodate large-scale sparse features and has some ability to memorize specific information (such as IDs); the Deep part on the right can learn implicit relationships between features and, given the same number of features, has better learning and inference ability.

> **Reviewer (Collaborator):** Typo: 系数 (coefficient) --> 稀疏 (sparse).


<a id="org19637b5"></a>

## Defining the Model Inputs

The model takes only 3 inputs:

- `dnn_input`, the input of the Deep part
- `lr_input`, the input of the Wide part
- `click`, whether the ad was clicked, used as the label for the binary classification model

```python
dnn_merged_input = layer.data(
    name='dnn_input',
    type=paddle.data_type.sparse_binary_vector(data_meta_info['dnn_input']))

lr_merged_input = layer.data(
    name='lr_input',
    type=paddle.data_type.sparse_binary_vector(data_meta_info['lr_input']))

click = paddle.layer.data(name='click', type=dtype.dense_vector(1))
```

> **Reviewer (Collaborator):** Delete the extra blank lines at 135-136.

> **Author (Contributor):** done

<a id="orgd2cbfbd"></a>
> **Reviewer (Collaborator):** Remove these anchor tags from the markdown for now; the HTML will be rendered uniformly later.

## Building the Wide Part

The Wide part directly uses the LR model, but the activation function is changed to `RELU` to speed things up.

```python
def build_lr_submodel():
    fc = layer.fc(
        input=lr_merged_input, size=1, name='lr', act=paddle.activation.Relu())
    return fc
```

<a id="orgd78c9ff"></a>
> **Reviewer (Collaborator):** Remove these anchor tags from the markdown for now; the HTML will be rendered uniformly later.

## Building the Deep Part

The Deep part uses a standard multi-layer feed-forward NN model.

> **Reviewer (Collaborator):** NN or DNN (or both, which would require introducing the term NN)? Please make this consistent throughout the article.


```python
def build_dnn_submodel(dnn_layer_dims):
    dnn_embedding = layer.fc(input=dnn_merged_input, size=dnn_layer_dims[0])
    _input_layer = dnn_embedding
    # Reviewer note (Collaborator): rename `no` to something like i, idx, or num.
    for no, dim in enumerate(dnn_layer_dims[1:]):
        fc = layer.fc(
            input=_input_layer,
            size=dim,
            act=paddle.activation.Relu(),
            name='dnn-fc-%d' % no)
        _input_layer = fc
    return _input_layer
```

> **Reviewer (Collaborator):** Remove the extra blank lines at 172-173.

<a id="org92e3541"></a>

## Combining the Two

The top-level outputs of the two submodels are combined with a weighted sum to produce the output of the whole model. The output uses `sigmoid` as the activation function to obtain a prediction in the interval \((0,1)\), which approximates the distribution of the binary labels in the training data and is finally used as the estimated CTR.

```python
# combine DNN and LR submodels
def combine_submodels(dnn, lr):
    merge_layer = layer.concat(input=[dnn, lr])
    fc = layer.fc(
        input=merge_layer,
        size=1,
        name='output',
        # use sigmoid function to approximate ctr rate, a float value between 0 and 1.
        act=paddle.activation.Sigmoid())
    return fc
```

> **Reviewer (Collaborator):** "a float value between 0 and 1." --> "which outputs a float value between 0 and 1."

<a id="orgb4020a9"></a>
> **Reviewer (Collaborator):** Remove these anchor tags from the markdown for now; the HTML will be rendered uniformly later.

## Defining the Training Task

```python
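# NOTE: the review comment further down asks about the missing paddle.init() call.
# This PR's later commits ("add paddle.init", "set trainer_count=1") initialize
# PaddlePaddle before any layers or the trainer are built, along these lines
# (the exact arguments here are an assumption):
#   paddle.init(use_gpu=False, trainer_count=1)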
dnn = build_dnn_submodel(dnn_layer_dims)
lr = build_lr_submodel()
output = combine_submodels(dnn, lr)

# ==============================================================================
# cost and train period
# ==============================================================================
classification_cost = paddle.layer.multi_binary_label_cross_entropy_cost(
    input=output, label=click)

params = paddle.parameters.create(classification_cost)

optimizer = paddle.optimizer.Momentum(momentum=0)
# Reviewer note (Collaborator): won't this break without paddle.init()?

trainer = paddle.trainer.SGD(
    cost=classification_cost, parameters=params, update_equation=optimizer)

dataset = AvazuDataset(train_data_path, n_records_as_test=test_set_size)

def event_handler(event):
    if isinstance(event, paddle.event.EndIteration):
        if event.batch_id % 100 == 0:
            logging.warning("Pass %d, Samples %d, Cost %f" % (
                event.pass_id, event.batch_id * batch_size, event.cost))

        if event.batch_id % 1000 == 0:
            result = trainer.test(
                reader=paddle.batch(dataset.test, batch_size=1000),
                feeding=field_index)
            logging.warning("Test %d-%d, Cost %f" % (event.pass_id, event.batch_id,
                                                     result.cost))


trainer.train(
    reader=paddle.batch(
        paddle.reader.shuffle(dataset.train, buf_size=500),
        batch_size=batch_size),
    feeding=field_index,
    event_handler=event_handler,
    num_passes=100)


```
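After training, the same `output` layer can be queried for CTR estimates. Below is a minimal inference sketch, assuming the PaddlePaddle v2 `paddle.infer` API; it is not part of the diff in this PR, and the sample record, index values, and feeding map are hypothetical.

```python
# Hypothetical inference sketch (not from the PR's diff): each sparse_binary_vector
# input is fed as a list of active indices; `output` and `params` are the layer and
# parameters built in the sections above.
sample = [([3, 17, 205], [1, 9])]  # one record: (dnn_input indices, lr_input indices)

probs = paddle.infer(
    output_layer=output,
    parameters=params,
    input=sample,
    feeding={'dnn_input': 0, 'lr_input': 1})
print(probs)  # estimated CTR value(s) in (0, 1)
```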
> **Reviewer (Collaborator):**
> 1. Add a section: `## 运行训练和测试` (Run Training and Testing).
> 2. Give a brief, step-by-step description of how a user who has cloned this repo should run the scripts in this example, for instance covering:
>    - which script to run first to download the data / prepare the environment;
>    - which script starts the training task, and whether any parameters need to be changed;
>    - which script is responsible for reading the data, and which script to modify in order to feed in one's own data.

> **Author (Contributor):** done


<a id="org8f6a6fa"></a>

# References

> **Reviewer (Collaborator):**
> 1. Use a second-level heading, `## 参考文献` (References); each article should keep only one first-level heading.
> 2. Use a plain numbered list for the references, without square brackets; use markers like [1] where the references are cited.
> 3. Please also attach links for the papers.


- [1] <https://en.wikipedia.org/wiki/Click-through_rate>
- [2] Mikolov, Tomáš, et al. "Strategies for training large scale neural network language models." Automatic Speech Recognition and Understanding (ASRU), 2011 IEEE Workshop on. IEEE, 2011.
- [3] <https://www.kaggle.com/c/avazu-ctr-prediction/data>
- [4] Cheng, Heng-Tze, et al. "Wide & deep learning for recommender systems." Proceedings of the 1st Workshop on Deep Learning for Recommender Systems. ACM, 2016.
