Add Inference example and unit test for rnn_encoder_decoder #8176
Conversation
Thanks for the PR. This looks pretty good to me. However, I would also ask for a review from @helinwang and/or @tonyyang-svail for the prune.cc part.
paddle/framework/op_desc.cc
@@ -124,11 +124,10 @@ OpDesc::OpDesc(const proto::OpDesc &desc, ProgramDesc *prog, BlockDesc *block)
   // restore attrs_
   for (const proto::OpDesc::Attr &attr : desc_.attrs()) {
     std::string attr_name = attr.name();
     // The sub_block referred to by the BLOCK attr hasn't be added
minor: "hasn't be" -> "hasn't been"
if save_dirname is not None:
    fluid.io.save_inference_model(
        save_dirname, ['source_sequence', 'target_sequence'],
        [prediction], exe)
I think we need to rethink the decoding for inference. In this example, we take the target_embedding as input in lstm_decoder_without_attention(), which requires the target_sequence. However, at inference time we ideally don't have access to the target, so the decoding used for inference has to be modified.
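The fix suggested above is usually greedy (or beam-search) decoding: instead of feeding the ground-truth target_sequence, feed the model's own previous prediction back in at each step. A minimal pure-Python sketch of the greedy loop, with a toy `decoder_step` standing in for one LSTM decoder step (all names and token ids here are hypothetical, not part of this PR):

```python
# Greedy decoding sketch: at inference time the next input token is the
# argmax of the previous step's output, not a ground-truth target token.
BOS, EOS = 0, 3  # hypothetical begin/end-of-sequence token ids


def decoder_step(token, state):
    """Toy stand-in for one decoder step; returns (logits, new_state).

    A real step would embed `token`, run the LSTM cell on (embedding,
    state), and project the hidden state to vocabulary logits.
    """
    logits = [0.0, 0.0, 0.0, 0.0]
    logits[(token + 1) % 4] = 1.0  # deterministic toy transition
    return logits, state


def greedy_decode(init_state, max_len=10):
    token, state, result = BOS, init_state, []
    for _ in range(max_len):
        logits, state = decoder_step(token, state)
        # Feed back our own prediction instead of a target token.
        token = max(range(len(logits)), key=lambda i: logits[i])
        if token == EOS:
            break
        result.append(token)
    return result
```

With the toy transition above, `greedy_decode(None)` emits tokens until it predicts EOS. In the real model the loop would be expressed with the while operator that this PR's pruning changes are meant to handle.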
I agree. We can refine the inference example in a future PR.
Thanks @sidgoyal78 ! It would be great if @tonyyang-svail could review this PR.
LGTM. @tonyyang-svail can take a look at his convenience to suggest improvements.
LGTM
fix #8174
fix #8170
fix #8161
fix #8062
fix #8059
The emphasis of this PR is to make the pruning method and save/load_inference_model work with RNN-related program descs that contain, for example, the while operator.
Since other RNN-based book chapters, like machine translation, depend on this PR, we'd better review and merge it quickly.
Minor issues like this can be fixed in other PRs. The C++ inference example for this chapter will probably also be added in a future PR.
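The pruning this PR exercises can be pictured as a backward reachability pass over the ops of a program desc: keep only the ops whose outputs are (transitively) needed to compute the fetch targets. A simplified pure-Python model of that idea (the op and variable names below are made up for illustration; the real implementation is in prune.cc):

```python
# Simplified model of dependency pruning: walk the op list backwards,
# keeping an op only if it produces a variable that is still needed,
# and adding its inputs to the needed set.
def prune(ops, targets):
    """ops: list of (name, inputs, outputs); targets: variable names to fetch."""
    needed = set(targets)
    kept = []
    for name, inputs, outputs in reversed(ops):
        if needed & set(outputs):          # this op produces something we need
            kept.append((name, inputs, outputs))
            needed |= set(inputs)          # its inputs become needed too
    kept.reverse()
    return kept


program = [
    ("mul",     ["x", "w"],  ["h"]),
    ("relu",    ["h"],       ["a"]),
    ("softmax", ["a"],       ["prediction"]),
    ("sgd",     ["w", "g"],  ["w_new"]),   # training-only op
]

inference = prune(program, ["prediction"])
# The training-only "sgd" op is dropped; the forward chain is kept.
```

Handling RNN programs is harder than this sketch suggests because a while op owns a sub-block whose ops must be pruned recursively, which is what the changes around op_desc.cc and prune.cc address.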