
bfloat16 value type support #266

Closed
xiangzez opened this issue Jul 25, 2022 · 3 comments · Fixed by #268

Comments

@xiangzez (Contributor) commented Jul 25, 2022

TFRA supports many value types, including float16, but bfloat16 is not among them. Is there a specific reason for that?

I tried adding a BF16 value type kernel for the cuckoo hashtable op, and it seems to work. It also converges more easily than FP16 in training.
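As background for why bf16 tends to converge more easily than fp16: bfloat16 keeps float32's full 8-bit exponent and truncates the mantissa to 7 bits, so it preserves float32's dynamic range, whereas fp16's 5-bit exponent overflows on large values. This is not TFRA code, just a minimal pure-Python sketch of the bit-level relationship between float32 and bfloat16 (the helper names are made up for illustration):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Return the 16-bit pattern of x's bfloat16 truncation: keep the
    sign bit, the full 8-bit exponent, and the top 7 mantissa bits of
    the float32 encoding (round toward zero, for simplicity)."""
    f32_bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return f32_bits >> 16

def bfloat16_bits_to_float(b: int) -> float:
    """Expand a bfloat16 bit pattern back to a float by zero-filling
    the 16 dropped mantissa bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# Same dynamic range as float32: 1e38 survives the round trip
# (only losing mantissa precision), while fp16 would overflow to inf.
big = bfloat16_bits_to_float(float32_to_bfloat16_bits(1e38))
```

The trade-off is precision: with only 7 mantissa bits, bfloat16 carries roughly 2-3 decimal digits, which is why it is typically used for activations and embedding values rather than for accumulations.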

@rhdong (Member) commented Jul 25, 2022

Thank you @xiangzez, there are no specific reasons; you are welcome to submit a PR for this.

@rhdong (Member) commented Jul 26, 2022

@xiangzez
Additionally, at the beginning we couldn't confirm that TF's bf16 support worked well on CPUs, so we didn't enable the bf16 data type. Since you have already done enough testing, I believe we can try enabling it. Could you submit a PR?

@xiangzez (Contributor, Author) commented

Yes, TF now supports bf16 well on CPU. I'll submit a PR for this.
