add target_confidence_assign_op #7921

Closed · wants to merge 1 commit
135 changes: 135 additions & 0 deletions paddle/operators/target_confidence_assign_op.cc
@@ -0,0 +1,135 @@
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#include "paddle/operators/target_confidence_assign_op.h"

namespace paddle {
namespace operators {

class TargetConfidenceAssignOp : public framework::OperatorWithKernel {
public:
using framework::OperatorWithKernel::OperatorWithKernel;

void InferShape(framework::InferShapeContext* ctx) const override {
PADDLE_ENFORCE(
ctx->HasInput("Conf"),
"Input(Conf) of TargetConfidenceAssignOp should not be null");
PADDLE_ENFORCE(
ctx->HasInput("GTLabels"),
"Input(GTLabels) of TargetConfidenceAssignOp should not be null");
PADDLE_ENFORCE(ctx->HasInput("MatchIndices"),
"Input(MatchIndices) of TargetConfidenceAssignOp should "
"not be null");
PADDLE_ENFORCE(
ctx->HasInput("NegIndices"),
"Input(NegIndices) of TargetConfidenceAssignOp should not be null");

PADDLE_ENFORCE(
ctx->HasOutput("ConfGT"),
"Output(ConfGT) of TargetConfidenceAssignOp should not be null.");
PADDLE_ENFORCE(
ctx->HasOutput("ConfPred"),
"Output(ConfPred) of TargetConfidenceAssignOp should not be null.");

auto conf_dims = ctx->GetInputDim("Conf");
auto gt_dims = ctx->GetInputDim("GTLabels");
auto mi_dims = ctx->GetInputDim("MatchIndices");
auto neg_dims = ctx->GetInputDim("NegIndices");
PADDLE_ENFORCE_EQ(conf_dims.size(), 3UL,
"The rank of Input(Conf) must be 3, the shape is "
"[batch_size, prior_box_num, class_num].");
PADDLE_ENFORCE_EQ(gt_dims.size(), 2UL,
"The rank of Input(GTLabels) must be 2, the shape is "
"[N, 1].");
PADDLE_ENFORCE_EQ(mi_dims.size(), 2UL,
"The rank of Input(MatchIndices) must be 2, the shape is "
"[batch_size, prior_box_num].");
PADDLE_ENFORCE_EQ(neg_dims.size(), 2UL,
"The rank of Input(NegIndices) must be 2, the shape is "
"[N, 1].");

PADDLE_ENFORCE_EQ(conf_dims[0], mi_dims[0],
"The batch_size of Input(Conf) and "
"Input(MatchIndices) must be the same.");

PADDLE_ENFORCE_EQ(conf_dims[1], mi_dims[1],
"The prior_box_num of Input(Loc) and "
"Input(MatchIndices) must be the same.");
PADDLE_ENFORCE_EQ(gt_dims[1], 1UL,
"The shape of Input(GTLabels) is [N, 1].");
PADDLE_ENFORCE_EQ(neg_dims[1], 1UL,
"The shape of Input(NegIndices) is [Nneg, 1].");
}

protected:
framework::OpKernelType GetExpectedKernelType(
const framework::ExecutionContext& ctx) const override {
return framework::OpKernelType(
framework::ToDataType(ctx.Input<framework::Tensor>("Conf")->type()),
ctx.device_context());
}
};

class TargetConfidenceAssignOpMaker : public framework::OpProtoAndCheckerMaker {
public:
TargetConfidenceAssignOpMaker(OpProto* proto, OpAttrChecker* op_checker)
: OpProtoAndCheckerMaker(proto, op_checker) {
AddInput("Conf",
Review comment (Contributor):
In the NMS Op, I use the name score. We can unify the name. Score or Conf, which is better? (I can also change my PR; let's just unify it here.)

"(Tensor, default Tensor<float>), The input confidence "
"predictions.");
Review comment (Contributor):
Better to give the shape as well.
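
For illustration, one possible wording that carries the shape (a sketch of the suggested doc string, not code from this PR):

AddInput("Conf",
"(Tensor, default Tensor<float>), The input confidence "
"predictions with shape [batch_size, prior_box_num, class_num].");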

AddInput(
"GTLabels",
"(LoDTensor, default LoDTensor<int>), The input ground-truth labels.");
AddInput("MatchIndices",
"(LoDTensor, default LoDTensor<int>), The input matched indices, "
Review comment (Contributor):
LoDTensor -> Tensor

"When it's equal to -1, it doesn't match any entity.");
AddInput("NegIndices",
"(LoDTensor, default LoDTensor<int>), The input negative example "
"indics.");
Review comment (Contributor):
Typo: indics -> indices

AddOutput("ConfGT",
"(LoDTensor), The output ground-truth labels filtered by "
"MatchIndices and append NegIndices examples.");
Review comment (Contributor):
The grammar of this sentence is off; what is the subject of "append"?

AddOutput("ConfPred",
"(LoDTensor), The output confidence predictions filtered by "
"MatchIndices and append NegIndices examples.");
Review comment (Contributor):
Likewise, please revise this description as well.

AddAttr<int>("background_label_id",
"(int, default 0), Label id for background class.")
.SetDefault(0);
AddComment(R"DOC(
TargetConfidenceAssign operator

Filter ground-truth labels when the corresponding MatchIndices is not -1,
and append negative examples with label background_label_id,
it produces the output ConfGT.
Filter confidence predictions when the corresponding MatchIndices is not -1,
and append negative examples' confidence prediction.
it produces the output ConfPred.

)DOC");
}
};

} // namespace operators
} // namespace paddle

namespace ops = paddle::operators;
REGISTER_OP_WITHOUT_GRADIENT(target_confidence_assign,
ops::TargetConfidenceAssignOp,
ops::TargetConfidenceAssignOpMaker);
REGISTER_OP_CPU_KERNEL(
target_confidence_assign,
ops::TargetConfidenceAssignOpKernel<paddle::platform::CPUDeviceContext,
float>,
ops::TargetConfidenceAssignOpKernel<paddle::platform::CPUDeviceContext,
double>);
100 changes: 100 additions & 0 deletions paddle/operators/target_confidence_assign_op.h
@@ -0,0 +1,100 @@
/* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. */

#pragma once
#include "paddle/framework/eigen.h"
#include "paddle/framework/op_registry.h"
Review comment (Contributor):
This kernel only supports CPU; the code in the .h file can be moved to the .cc file.


namespace paddle {
namespace operators {

template <typename DeviceContext, typename T>
class TargetConfidenceAssignOpKernel : public framework::OpKernel<T> {
public:
void Compute(const framework::ExecutionContext& ctx) const override {
auto* in_conf = ctx.Input<framework::Tensor>("Conf");
auto* in_gt_labels = ctx.Input<framework::LoDTensor>("GTLabels");
auto* in_match_indices = ctx.Input<framework::LoDTensor>("MatchIndices");
Review comment (Contributor):
LoDTensor -> Tensor
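
For illustration, a one-line sketch of the suggested change (the input would then be read as a plain Tensor; not code from this PR):

auto* in_match_indices = ctx.Input<framework::Tensor>("MatchIndices");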

auto* in_neg_indices = ctx.Input<framework::LoDTensor>("NegIndices");

auto* out_conf_gt = ctx.Output<framework::LoDTensor>("ConfGT");
auto* out_conf_pred = ctx.Output<framework::LoDTensor>("ConfPred");
int background_label_id = ctx.Attr<int>("background_label_id");

auto in_conf_dim = in_conf->dims();
auto gt_lod = in_gt_labels->lod();
auto neg_indices_lod = in_neg_indices->lod();
int batch_size = in_conf_dim[0];
int prior_num = in_conf_dim[1];
int class_num = in_conf_dim[2];

auto conf = framework::EigenTensor<T, 3>::From(*in_conf);
auto gt_labels = framework::EigenTensor<int, 2>::From(*in_gt_labels);
auto match_indices =
framework::EigenTensor<int, 2>::From(*in_match_indices);
auto neg_indices = framework::EigenTensor<int, 2>::From(*in_neg_indices);

int match_num = 0;
int neg_num = in_neg_indices->dims()[0];
for (int n = 0; n < batch_size; ++n) {
for (int p = 0; p < prior_num; ++p) {
if (match_indices(n, p) != -1) match_num++;
}
}

framework::LoD out_lod;
out_lod.resize(1);
out_lod[0].push_back(0);
out_conf_gt->mutable_data<int>(
framework::make_ddim({match_num + neg_num, 1}), ctx.GetPlace());
out_conf_pred->mutable_data<T>(
framework::make_ddim({match_num + neg_num, class_num}), ctx.GetPlace());

auto conf_gt = framework::EigenTensor<int, 2>::From(*out_conf_gt);
auto conf_pred = framework::EigenTensor<T, 2>::From(*out_conf_pred);

int count = 0;
for (int n = 0; n < batch_size; ++n) {
for (int p = 0; p < prior_num; ++p) {
int idx = match_indices(n, p);
if (idx == -1) continue;
int gt_start = gt_lod[0][n];
int gt_offset = gt_start + idx;
int label = gt_labels(gt_offset);
conf_gt(count) = label;
Review comment (Contributor):
Lines 72 - 75 can be merged:

For convenience, line 36 can take out the last LoD level; these ops only support one LoD level, so before taking it you should check in_gt_labels->lod().size() == 1UL.

auto gt_lod = in_gt_labels->lod().back();
conf_gt(count) = gt_labels(gt_lod[0] + idx);
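
For illustration, a sketch of the merged form under the reviewer's single-LoD-level assumption (not code from this PR; note that inside the batch loop the start offset of sample n is gt_lod[n]):

// Before the loops: the op only supports one LoD level, so check it once.
PADDLE_ENFORCE_EQ(in_gt_labels->lod().size(), 1UL,
"Input(GTLabels) must have exactly one LoD level.");
auto gt_lod = in_gt_labels->lod().back();
// Inside the (n, p) loop: gt_lod[n] is where sample n's labels start.
conf_gt(count) = gt_labels(gt_lod[n] + idx);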

for (int c = 0; c < class_num; ++c) {
conf_pred(count, c) = conf(n, p, c);
}
Review comment (Contributor):
Use std::copy for consecutive assignments.
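
For illustration, a minimal sketch of the std::copy form (assumes row-major, contiguous storage and an <algorithm> include; not code from this PR):

// Copy the class_num scores of prior p in sample n into row `count` of ConfPred.
const T* src = in_conf->data<T>() + (n * prior_num + p) * class_num;
T* dst = out_conf_pred->data<T>() + count * class_num;
std::copy(src, src + class_num, dst);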

count += 1;
}

int neg_start = neg_indices_lod[0][n];
int neg_end = neg_indices_lod[0][n + 1];
for (int ne = neg_start; ne < neg_end; ++ne) {
int idx = neg_indices(ne);
conf_gt(count) = background_label_id;
for (int c = 0; c < class_num; ++c) {
conf_pred(count, c) = conf(n, idx, c);
}
Review comment (Contributor):
Same as above.

count += 1;
}
out_lod[0].push_back(count);
}
out_conf_gt->set_lod(out_lod);
out_conf_pred->set_lod(out_lod);
}
};

} // namespace operators
} // namespace paddle