
Agenda

Meeting notes

Participants: Alexey Bader (Intel), Mariya Podchischaeva (Intel), Ronan Keryell (Xilinx), Victor Lomuller (Codeplay), Hal Finkel (ANL)

  • Opens

    • No.
  • Patches in review for llorg:

    • https://reviews.llvm.org/D80932 - [SYCL] Make default address space a superset of OpenCL address spaces
      • Ronan: How are we going to handle all the questions from Anastasia?
      • Victor: Should we get Anastasia involved in SYCL development?
      • Alexey: I'm going to write an RFC to the cfe-dev mailing list and address the questions there.
      • Victor: We should draw a parallel to CUDA when comparing programming models. Other single-source programming models have the same problems.
      • Victor: Here we are trying to pull some OpenCL concepts into single source.
      • Victor: CUDA doesn't resolve any address spaces - everything is generic.
      • Victor: The SYCL approach is somewhere in between - that's why we need to make changes (see the address-space sketch after these patch notes).
      • Victor: The new version of the spec will help by generalizing back-ends.
    • https://reviews.llvm.org/D74387 - [SYCL] Do not diagnose use of __float128
      • Done. Committed a unified implementation for the SYCL and OpenMP compilers.
    • https://reviews.llvm.org/D81641 - [SYCL] Implement thread-local storage restriction
      • Similar to the previous case, we need to unify the implementation to support multiple GPGPU programming models.
      • AR Mariya: double-check with Johannes whether we need to generalize the approach in the same patch.
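
A minimal single-source sketch of the address-space question discussed under D80932. This is not code from the patch; the helper name read_twice and the buffer sizes are made up for illustration. It shows why an unqualified C++ pointer in a SYCL kernel has to be lowered to a default address space that covers the OpenCL ones: the same pointer parameter receives pointers into both global and private memory.

```cpp
// Hypothetical single-source example (not from D80932): one unqualified
// pointer type has to work for several OpenCL memory regions.
#include <sycl/sycl.hpp>

// In OpenCL C the address space is part of the pointer type (__global float*,
// __private float*, ...), so this helper would need one overload per space.
// In single-source SYCL it is plain C++ with an unqualified pointer.
static float read_twice(const float *p) { return p[0] + p[1]; }

int main() {
  sycl::queue q;
  float *global_buf = sycl::malloc_shared<float>(4, q); // global memory
  for (int i = 0; i < 4; ++i)
    global_buf[i] = float(i);

  q.parallel_for(sycl::range<1>(4), [=](sycl::id<1> i) {
    float private_copy[2] = {global_buf[0], global_buf[1]}; // private memory
    // The same `const float *` parameter receives pointers into global and
    // private memory.  The compiler has to either infer the address space or
    // lower the unqualified pointer to a "default" address space that is a
    // superset of the OpenCL ones, which is what D80932 is about.
    global_buf[i] = read_twice(global_buf) + read_twice(private_copy);
  }).wait();

  sycl::free(global_buf, q);
  return 0;
}
```

In CUDA the equivalent helper would simply take a generic pointer, while OpenCL C would require per-address-space overloads; SYCL sits in between, as noted above.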
  • GitHub issues/PRs

    • https://github.com/intel/llvm/issues/1799#issuecomment-637395726 - half implementation: __fp16 vs _Float16.
      • Mariya: after my patch adding diagnostics for __float128, some tests started failing.
      • Victor: The CUDA target reported that it doesn't support the half data type.
      • Victor: _Float16 is used for the half data type implementation.
      • Victor: NVIDIA supports half, but not for all operations.
      • Victor: If we use __fp16 for half, it should work everywhere, even if the target doesn't support it at the HW level (see the half-precision sketch after these notes).
      • Victor: I have a prototype PR for using __fp16 in the half implementation.
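
A small host-side sketch contrasting the two half-precision types from the __fp16 vs _Float16 discussion above. It is illustrative only and not the SYCL half implementation; whether each type is accepted depends on the Clang version and target (for example, AArch64 accepts both).

```cpp
// Hypothetical comparison of Clang's two half-precision types.
#include <cstdio>

int main() {
  // __fp16 is a storage-only format: values are promoted to float before any
  // arithmetic, so code using it can be lowered even on targets without
  // native half-precision instructions.
  __fp16 storage_half = 1.5f;
  float promoted = storage_half + 2.0f; // arithmetic performed in float

  // _Float16 is a genuine arithmetic type: the addition below is a
  // half-precision operation and needs target support, which is why the CUDA
  // target reported the type as unsupported.
  _Float16 native_half = 1.5;
  _Float16 sum = native_half + static_cast<_Float16>(2.0f); // done in half

  std::printf("%f %f\n", promoted, static_cast<float>(sum));
  return 0;
}
```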
  • Back-burner
