
【Hackathon 5th No.103】move fc to phi -part #58777

Merged: 9 commits into PaddlePaddle:develop, Nov 15, 2023

Conversation

zeroRains (Contributor)

PR types: Others

PR changes: Others

Description: move fc to phi (#57262)

@CLAassistant commented Nov 7, 2023

CLA assistant check: all committers have signed the CLA.

@zeroRains (Contributor Author)

When the test_onednn_fc_activation_fuse_pass unit test runs, it calls into fc. The failure happens at line 185 of the fluid source file, inside the AppendActivation function; see the screenshot for details.

The awkward part is that the phi style passes every attribute the kernel uses as a function argument, so is there some way to query ctx.HasAttr("fuse_activation") the way the fluid code does?

[screenshot]

move fc to phi

try to fix bug
@yuanlehome (Contributor)

> When the test_onednn_fc_activation_fuse_pass unit test runs, it calls into fc. The failure happens at line 185 of the fluid source file, inside the AppendActivation function; see the screenshot for details.
>
> The awkward part is that the phi style passes every attribute the kernel uses as a function argument, so is there some way to query ctx.HasAttr("fuse_activation") the way the fluid code does?

The interfaces ctx provides are all in xxx_context.h; for onednn that is paddle/phi/backends/onednn/onednn_context.h, which has the interface you need.
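For illustration, a minimal sketch of how a phi onednn kernel might query that attribute through the device context. It assumes the HasDnnAttr/GetDnnAttr accessors declared in onednn_context.h and Paddle's PADDLE_GET_CONST macro; the helper name GetFuseActivation is hypothetical and further includes are omitted:

```cpp
#include <string>

#include "paddle/phi/backends/onednn/onednn_context.h"

namespace phi {
namespace fusion {

// Hypothetical helper: read the "fuse_activation" attribute from the
// OneDNN device context instead of taking it as a kernel argument.
// HasDnnAttr/GetDnnAttr are the accessors exposed by OneDNNContext;
// PADDLE_GET_CONST (pulled in transitively in real kernel sources)
// extracts the typed value from the attribute variant.
inline std::string GetFuseActivation(const OneDNNContext& dev_ctx) {
  return dev_ctx.HasDnnAttr("fuse_activation")
             ? PADDLE_GET_CONST(std::string,
                                dev_ctx.GetDnnAttr("fuse_activation"))
             : std::string();
}

}  // namespace fusion
}  // namespace phi
```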

@zeroRains (Contributor Author)

> The interfaces ctx provides are all in xxx_context.h; for onednn that is paddle/phi/backends/onednn/onednn_context.h, which has the interface you need.

Got it.

@zeroRains (Contributor Author)

[screenshot]
Strange, this compiled fine before.

@zeroRains (Contributor Author)

Both the fc unit test and the FLAG-enabled unit test pass.

@yuanlehome (Contributor)

> Both the fc unit test and the FLAG-enabled unit test pass.

👍 I'll take care of the CI and find a colleague to review.

@luotao1 luotao1 changed the title 【Hackathon 5th No.103】move fc to phi 【Hackathon 5th No.103】move fc to phi -part Nov 13, 2023
@@ -109,6 +109,16 @@
func : fast_where_xpu
data_type : x

- op : fc
args : (Tensor input, Tensor w, Tensor bias, int in_num_col_dims = 1, str activation_type = "", bool use_mkldnn = false, bool padding_weights = false, bool use_quantizer = false, str mkldnn_data_type = "float32", float scale_in = 1.0f, float[] scale_weights = {1.0f}, float scale_out = 1.0f, bool force_fp32_output = false)
Contributor:

Keep only these four: Tensor input, Tensor w, Tensor bias, int in_num_col_dims = 1. Move all the others into extra.

Contributor Author:

Moving float[] scale_weights = {1.0f} into extra raises an error: the [] and {} cannot be parsed.

Contributor:

[screenshot]
Add quotes around it.

Contributor Author:

OK.

Contributor Author:

Hmm, after moving all of them into extra, the operator's InferShape no longer matches, and if I delete those attributes from InferShape, they can no longer be checked there...

Contributor Author:

[screenshot]

@zeroRains (Contributor Author), Nov 14, 2023:

It seems that attributes not declared in args cannot be filled in.

Contributor:

[screenshot]
What if you fill in just the name?

Contributor:

If it can't be solved, let's leave it like this for now.

Contributor Author:

> If it can't be solved, let's leave it like this for now.

All right.

Filling in just the name also raises an error.
[screenshot]

namespace phi {
namespace fusion {
template <typename T, typename Context>
void FCKernel(const Context& dev_ctx,
Contributor:

Fuse-type operators do not need a header-file declaration.

@yuanlehome (Contributor) left a comment

LGTM

@phlrain phlrain self-requested a review November 15, 2023 09:13
@ZzSean (Contributor) left a comment

LGTM for CI-OP-Benchmark

@luotao1 luotao1 merged commit 4adac36 into PaddlePaddle:develop Nov 15, 2023
@zeroRains zeroRains deleted the 103_fc branch November 24, 2023 14:33
SecretXV pushed a commit to SecretXV/Paddle that referenced this pull request Nov 28, 2023
@yuanlehome (Contributor)

Hi, the migration of this kernel to phi has a problem and needs a follow-up PR to fix it: the mkldnn-related parameters should not be written into the yaml, nor appear in the kernel or InferMeta signatures.
For how the parameters are passed in, see: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/fluid/framework/operator.cc#L3545
For how to fetch the parameters inside the kernel, see: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/phi/kernels/onednn/matmul_kernel.cc#L109
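For reference, a rough sketch of the pattern those two links point at (a sketch, not the actual fix): the onednn-only attributes stay out of the yaml, kernel, and InferMeta signatures and are read inside the kernel body from the OneDNN context, falling back to the defaults this PR listed in args. The attribute names and defaults below are copied from the args line above and may not match the names actually registered:

```cpp
// Inside the onednn fc kernel body; dev_ctx is the OneDNNContext.
// Each extra attribute may or may not be set at runtime, so probe for
// it and fall back to the default the yaml previously declared.
const bool force_fp32_output =
    dev_ctx.HasDnnAttr("force_fp32_output")
        ? PADDLE_GET_CONST(bool, dev_ctx.GetDnnAttr("force_fp32_output"))
        : false;
const float scale_in =
    dev_ctx.HasDnnAttr("scale_in")
        ? PADDLE_GET_CONST(float, dev_ctx.GetDnnAttr("scale_in"))
        : 1.0f;
const std::vector<float> scale_weights =
    dev_ctx.HasDnnAttr("scale_weights")
        ? PADDLE_GET_CONST(std::vector<float>,
                           dev_ctx.GetDnnAttr("scale_weights"))
        : std::vector<float>{1.0f};
```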

@yuanlehome (Contributor)

Verification before and after the fix should be done the same way as before.
