
Do not emit diagnostic with extended device lambdas with preserved re… #1495

Merged · 10 commits · Apr 1, 2024

Conversation

@Revaj (Contributor) commented Mar 6, 2024

Description

Follows up on #1457.

Allows the return type of extended lambdas that use a trailing return type to be queried with cuda::std::invoke_result_t.
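
For context, a minimal sketch of what this enables, contrasted with the documented cuda::proclaim_return_type helper; the variable names here are hypothetical, not taken from the PR:

#include <cuda/functional>      // cuda::proclaim_return_type
#include <cuda/std/type_traits> // cuda::std::invoke_result_t

// Previously, querying the return type of an extended __device__ lambda
// from host code required proclaiming it explicitly:
auto f_old = cuda::proclaim_return_type<float>([] __device__() { return 3.14f; });

// With this change, writing a trailing return type is enough for the
// query to succeed:
auto f_new = [] __device__() -> float { return 3.14f; };
using R    = cuda::std::invoke_result_t<decltype(f_new)>; // R is float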

Checklist

  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

@Revaj Revaj requested review from a team as code owners March 6, 2024 04:31
@Revaj Revaj requested review from griwes and gonidelis March 6, 2024 04:31
copy-pr-bot bot commented Mar 6, 2024

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@jrhemstad (Collaborator):

@miscco could you help @Revaj with adding a test that ensures the return type for a __device__ and __host__ __device__ lambda can be successfully queried with cuda::std::invoke_result_t iff it uses a trailing return type?

@Revaj (Contributor, Author) commented Mar 7, 2024

I'll be adding a test like this:

auto d_lm  = [] __device__() -> float { return 3.14f; };
auto dh_lm = [] __host__ __device__() -> float { return 3.14f; };
using Td  = decltype(d_lm);
using Tdh = decltype(dh_lm);

ASSERT_SAME_TYPE(cuda::std::invoke_result_t<Td>, float);
ASSERT_SAME_TYPE(cuda::std::invoke_result_t<Tdh>, float);
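
The complementary case behind jrhemstad's "iff", an extended lambda without a trailing return type, would look roughly like this; this is a sketch for illustration, not code from the PR:

auto d_bad = [] __device__() { return 3.14f; }; // deduced return type
// On nvcc versions that do not preserve the return type, querying this
// from host code is expected to fail to compile:
using Tbad = cuda::std::invoke_result_t<decltype(d_bad)>;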

Is there a specific location where I should add it?

@miscco (Collaborator) commented Mar 7, 2024

Is there a specific location where I should add it?

The tests are here: libcudacxx/test/libcudacxx/cuda/proclaim_return_type.pass.cpp

@Revaj (Contributor, Author) commented Mar 8, 2024

I noticed that std/utilities/meta/meta.trans/meta.trans.other/result_of.fail.cpp, which is supposed to fail, now compiles:

FAIL: libcu++ :: std/utilities/meta/meta.trans/meta.trans.other/result_of.fail.cpp (18 of 2029)
******************** TEST 'libcu++ :: std/utilities/meta/meta.trans/meta.trans.other/result_of.fail.cpp' FAILED ********************
Command: ['/usr/local/cuda/bin/nvcc', '-o', '/dev/null', '-x', 'cu', '/home/coder/cccl/libcudacxx/test/libcudacxx/std/utilities/meta/meta.trans/meta.trans.other/result_of.fail.cpp', '-c', '-std=c++17', '-ftemplate-depth=270', '-ccbin=/usr/bin/g++', '-include', '/home/coder/cccl/libcudacxx/test/support/nasty_macros.h', '-I/home/coder/cccl/libcudacxx/include', '-D__STDC_FORMAT_MACROS', '-D__STDC_LIMIT_MACROS', '-D__STDC_CONSTANT_MACROS', '-Xcompiler', '-fno-exceptions', '-Xcompiler', '-fno-rtti', '-D_LIBCUDACXX_NO_RTTI', '-I/home/coder/cccl/libcudacxx/test/support', '-D_CCCL_NO_SYSTEM_HEADER', '-include', '/home/coder/cccl/libcudacxx/test/libcudacxx/force_include.h', '--compiler-options=-Wall', '--compiler-options=-Wextra', '--extended-lambda', '-gencode=arch=compute_80,code=sm_80', '-c']
Exit Code: 0

Expected compilation to fail!

This is probably related to the PR, since the compiler diagnostic was updated. Should I update the test to expect success instead?

@miscco (Collaborator) commented Mar 8, 2024

This is probably related to the PR, since the compiler diagnostic was updated. Should I update the test to expect success instead?

We should keep that test as failing, since it still fails on older nvcc versions that do not support querying the trailing return type.

You can add a // UNSUPPORTED: nvcc-12.X && nvcc-12.Y line, where X is the first CTK version in which the test passes.
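
For illustration, the header of the .fail.cpp test might then look roughly like this; the concrete version tags are placeholders, since the final values are not stated in this comment:

// .fail.cpp tests expect compilation to fail, so newer toolkits where the
// construct now compiles are marked as unsupported:
// UNSUPPORTED: nvcc-12.X && nvcc-12.Y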

@Revaj (Contributor, Author) commented Mar 9, 2024

Test added in proclaim_return_type.pass.cpp

@jrhemstad (Collaborator):

/ok to test

@Revaj (Contributor, Author) commented Mar 12, 2024

I updated the test to account for CUDA versions 12.3 and below, and for the supported std versions.

@gonidelis (Member):

@Revaj please rebase so we can oktotest it :)

@Revaj Revaj force-pushed the invoke_lambda_type branch from 5169b38 to 24d0d2c Compare March 14, 2024 14:18
@Revaj (Contributor, Author) commented Mar 14, 2024

@Revaj please rebase so we can oktotest it :)

Done 😀

@jrhemstad (Collaborator):

/ok to test

@jrhemstad (Collaborator):

/ok to test

@jrhemstad jrhemstad enabled auto-merge (squash) March 15, 2024 15:19
auto-merge was automatically disabled March 16, 2024 02:27

Head branch was pushed to by a user without write access

@Revaj Revaj force-pushed the invoke_lambda_type branch from 1209076 to 7f71edf Compare March 16, 2024 02:27
@Revaj (Contributor, Author) commented Mar 16, 2024

For some reason, one of the tests failed in CI:

~/cccl/libcudacxx/test/libcudacxx/std/utilities/time/time.traits/time.traits.duration_values/max.pass.cpp

@Revaj (Contributor, Author) commented Mar 22, 2024

I merged the branch by mistake; I'm going to fix it when I can :(

@jrhemstad (Collaborator):

/ok to test

@Revaj Revaj force-pushed the invoke_lambda_type branch from 64c8d37 to 7f71edf Compare March 23, 2024 02:25
@Revaj Revaj force-pushed the invoke_lambda_type branch from 7f71edf to df96141 Compare March 23, 2024 02:26
@Revaj (Contributor, Author) commented Mar 23, 2024

I have fixed the merge error in the branch, so it should be OK now.

@jrhemstad (Collaborator):

/ok to test

@jrhemstad (Collaborator):

The Windows CI failures look unrelated. Rerunning those jobs.

@Revaj (Contributor, Author) commented Mar 27, 2024

It seems the tests have passed and the PR is ready to merge 😃

@jrhemstad jrhemstad merged commit e85d243 into NVIDIA:main Apr 1, 2024
584 checks passed
@jrhemstad (Collaborator):

Thanks, @Revaj!
