dynamic recurrent op forward c++ implementation #4597
Conversation
namespace operators {

namespace detail {
If you add "namespace detail", you should put this into "paddle/operators/detail" path.
Reference: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/framework/tensor_array.cc#L26
I think namespace detail is a special namespace; there is no point in putting it into a detail directory.
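For context, a minimal sketch of the idiom being discussed: helpers private to one translation unit live in a nested detail namespace in the same file rather than in a separate detail/ directory. The PrefixNames helper below is illustrative only, not from the PR:

#include <string>
#include <vector>

namespace paddle {
namespace operators {
namespace detail {

// File-local helper, deliberately kept out of the operator's public
// interface. Illustrative only.
inline std::vector<std::string> PrefixNames(
    const std::string& prefix, const std::vector<std::string>& names) {
  std::vector<std::string> prefixed;
  prefixed.reserve(names.size());
  for (const auto& n : names) {
    prefixed.push_back(prefix + n);
  }
  return prefixed;
}

}  // namespace detail
}  // namespace operators
}  // namespace paddle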
}  // namespace detail

class DynamicRecurrentAlgorithmProtoAndCheckerMaker
Why not name it DynamicRecurrentOpProtoAndCheckerMaker?
"names of pre-memories"); | ||
AddAttr<std::vector<std::string>>(name.memories, "names of memories"); | ||
|
||
AddComment("This is a recurrent group operator."); |
What is the difference between a recurrent group operator and a recurrent operator?
paddle/pybind/CMakeLists.txt (outdated)
@@ -1,6 +1,6 @@
 if(WITH_PYTHON)
   cc_library(paddle_pybind SHARED
     SRCS pybind.cc exception.cc protobuf.cc
-    DEPS pybind python backward proto_desc tensor_array
+    DEPS pybind python backward proto_desc tensor_array dynamic_recurrent_op
This does not need a special dependency.
namespace paddle {
namespace operators {

using framework::Scope;
There are two problems:
- You may not use a using-directive to make all names from a namespace available.
- Do not use namespace aliases at namespace scope in header files.

Ref: https://google.github.io/styleguide/cppguide.html#Namespaces
Scope is not a namespace, so it should be OK. The Google style guide only forbids "using namespace xxx" (a using-directive).
A using-declaration is still not recommended in a header file because it may pollute every file that includes the header; putting it in the .cc file is OK.
https://stackoverflow.com/questions/19940081/using-keyword-in-header-implementation-in-c
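A minimal sketch of the distinction being made, with a stub Scope standing in for the real framework::Scope and illustrative namespace names:

namespace framework { class Scope {}; }  // stub standing in for the real type

// In a header, avoid using-declarations at namespace scope: every file
// that includes the header silently inherits them.
namespace operators_header {
void RunInScope(const framework::Scope& scope);  // spell the name out
}  // namespace operators_header

// In a .cc file, a using-declaration is fine: its effect is confined
// to this translation unit.
namespace operators_impl {
using framework::Scope;        // OK: imports a single name
// using namespace framework;  // not OK per Google style: imports everything
void RunInScope(const Scope& scope) { (void)scope; }
}  // namespace operators_impl

int main() { operators_impl::RunInScope(framework::Scope{}); }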
  // call stepnet in all the time steps
  for (size_t step = 0; step < cache_.num_steps; step++) {
    stepnet_->Run(scope, dev_ctx);
step is not used?
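Presumably the intent is for each iteration to run the step net in that step's own scope, so the loop index is actually consumed. A toy, self-contained sketch of that shape — all types here are stand-ins, not the PR's actual API:

#include <cstddef>
#include <iostream>
#include <vector>

// Toy stand-ins for the PR's Scope and step-net types.
struct Scope { int id; };
struct StepNet {
  void Run(const Scope& scope) const {
    std::cout << "step net ran in scope " << scope.id << "\n";
  }
};

int main() {
  const std::size_t num_steps = 3;
  std::vector<Scope> step_scopes;
  for (std::size_t i = 0; i < num_steps; ++i) {
    step_scopes.push_back(Scope{static_cast<int>(i)});
  }

  StepNet stepnet;
  // The loop index selects the per-step scope, so `step` is consumed.
  for (std::size_t step = 0; step < num_steps; ++step) {
    stepnet.Run(step_scopes[step]);
  }
}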
  // TODO(superjom) check that all the inputs have the same LoD
  int level = 0;
  const auto& inlinks = cache_.inlinks;
  for (auto& item : inlinks) {
for (const auto& item : inlinks) {...}
  dy_seq_metas_[item.first] =
      ta.Unpack(tensor, level, true /*length_descend*/);

  if (cache_.num_steps) {
if (cache_.num_steps == 0UL) {...}
Use an explicit 0 comparison to make the type of num_steps clear.
This is OK; 0 converting to false is clear enough.
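For reference, the two styles under discussion, in a trivial standalone form:

#include <cstddef>

void Example(std::size_t num_steps) {
  // Style kept in the PR: a nonzero count converts to true implicitly.
  if (num_steps) {
    // ... run the steps ...
  }
  // Style suggested in review: the explicit comparison, whose 0UL literal
  // also documents that num_steps is unsigned.
  if (num_steps == 0UL) {
    return;  // nothing to do for an empty sequence
  }
}

int main() { Example(3); }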
  auto ta_it = step_inputs_.find(item.first);
  PADDLE_ENFORCE(ta_it != step_inputs_.end(),
                 "step_inputs_ not compatible with memory set");
  TensorArray& ta = step_inputs_[item.first];
TensorArray& ta = step_inputs_[item.first];
==>
TensorArray& ta = ta_it->second;
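Reusing the iterator returned by find avoids a second hash lookup and rules out operator[] accidentally default-inserting a missing key. A self-contained sketch of the pattern, with a stand-in TensorArray type:

#include <cassert>
#include <string>
#include <unordered_map>

struct TensorArray {};  // stand-in for the PR's type

int main() {
  std::unordered_map<std::string, TensorArray> step_inputs;
  step_inputs.emplace("x", TensorArray{});

  auto ta_it = step_inputs.find("x");
  assert(ta_it != step_inputs.end());  // mirrors the PADDLE_ENFORCE check

  // Reuse the iterator from find(): no second lookup, and no risk of
  // operator[] default-inserting a missing key.
  TensorArray& ta = ta_it->second;
  (void)ta;
}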
LGTM! There will be a series of PRs to complete the dynamic recurrent op.
resolves: #4554