Add float16 support to dropout operator #9223
Conversation
def test_check_output(self):
    if core.is_compiled_with_cuda() and core.op_support_gpu("dropout"):
        self.check_output_with_place(core.CUDAPlace(0), atol=1e-3)
TestFP16DropoutOp1 and TestFP16DropoutOp2 are very similar; one could inherit from the other, which would reduce the amount of code.
You are right! Done.
LGTM!
 #else
-HOSTDEVICE inline float16 operator+(const float16& a, const float16& b) {
+HOST inline float16 operator+(const float16& a, const float16& b) {
Maybe HOST is unnecessary.
I guess so. Let me fix that in the next PR.
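For context, these annotation macros are typically defined along the following lines (a minimal sketch of the common hostdevice.h-style pattern, not necessarily the exact Paddle source). When the file is not compiled by nvcc, HOST expands to nothing, so the qualifier adds little on a host-only code path, which is why it may be unnecessary here:

#ifdef __CUDACC__
#define HOSTDEVICE __host__ __device__
#define DEVICE __device__
#define HOST __host__
#else
// Plain C++ compilation: the qualifiers expand to nothing.
#define HOSTDEVICE
#define DEVICE
#define HOST
#endif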
fix #9222
Added a device function for multiplying two float16 numbers on the GPU, which is needed in the following code:
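The referenced code is not included in this excerpt. For illustration only, a minimal sketch of what such a device-side float16 multiply might look like, assuming CUDA's cuda_fp16.h intrinsics (float16_mul is a hypothetical name, not the function added in this PR):

#include <cuda_fp16.h>

// Sketch: multiply two half-precision values on the device.
// On compute capability >= 5.3 the native __hmul intrinsic is available;
// otherwise promote to float, multiply, and convert back to half.
__device__ inline half float16_mul(const half& a, const half& b) {
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 530
  return __hmul(a, b);
#else
  return __float2half(__half2float(a) * __half2float(b));
#endif
}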