Add fallback memory resource for TCC devices #257
Conversation
Force-pushed from 42346c5 to 319a372
Re-tested using synchronous malloc and free on a Tesla T4 colossus instance.

On the main branch:

```
(test_env) C:\cuda-python\cuda_core>python -m pytest tests\test_memory.py
platform win32 -- Python 3.12.7, pytest-8.3.4, pluggy-1.5.0
tests\test_memory.py FFFF                                        [100%]
```

With this change:

```
(test_env) C:\cuda-python\cuda_core>python -m pytest tests\test_memory.py
platform win32 -- Python 3.12.7, pytest-8.3.4, pluggy-1.5.0
tests\test_memory.py ....                                        [100%]
```
/ok to test
Windows failures are known (#271) and irrelevant. Let's merge. Thanks, Keenan!
For devices that don't support memory pools, we need to provide an alternate default memory resource.
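A minimal sketch of the fallback-selection pattern this PR describes: query whether the device supports memory pools and, if not, fall back to a synchronous allocator. The class names and the `pools_supported` helper below are illustrative stand-ins, not the actual `cuda.core` implementation; in the real driver the flag comes from `cuDeviceGetAttribute` with `CU_DEVICE_ATTRIBUTE_MEMORY_POOLS_SUPPORTED`.

```python
class AsyncMempoolResource:
    """Allocates from the device's default memory pool (cuMemAllocAsync path)."""

    def allocate(self, size: int):
        # Real implementation would call cuMemAllocAsync on a stream.
        return ("async", size)


class SyncFallbackResource:
    """Fallback for TCC devices: synchronous cuMemAlloc/cuMemFree."""

    def allocate(self, size: int):
        # Real implementation would call cuMemAlloc and block until done.
        return ("sync", size)


def default_memory_resource(pools_supported: bool):
    # Hypothetical selector: pools_supported stands in for the result of
    # cuDeviceGetAttribute(CU_DEVICE_ATTRIBUTE_MEMORY_POOLS_SUPPORTED, dev).
    if pools_supported:
        return AsyncMempoolResource()
    return SyncFallbackResource()
```

On a TCC device (no pool support) `default_memory_resource(False)` would hand back the synchronous resource, which is why the previously failing `tests/test_memory.py` cases pass with this branch.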
This basic WAR (workaround) implementation works. I used a colossus lease for a Tesla T4 on Friday, Nov 29, and these were the results:
Using the DefaultAsyncMempool:

```
python -m pytest tests/test_memory.py
=============================================== short test summary info ===============================================
FAILED tests/test_memory.py::test_buffer_initialization - cuda.core.experimental._utils.CUDAError: CUDA_ERROR_NOT_SUPPORTED: operation not supported
```

Using the implementation in this branch:

```
python -m pytest tests/test_memory.py
collected 4 items
tests\test_memory.py .... (SUCCESS)
```
Closes #208