[BUG] OutOfMemoryError in spectrogram #434
Hey @Bruyant. Thanks for using cuSignal! It looks like you're allocating more data on the GPU than you have physical space for. You mentioned that smaller sample sizes work as expected.
Unfortunately, this is not how cuSignal works, particularly if you're allocating memory directly on the GPU.
Is there a better place to put my input data to save memory?
What are you trying to do? A spectrogram on streaming data coming into the GPU?
Thanks for your answer.
A much easier workaround would be to allocate with CuPy's Managed Memory allocator (https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.ManagedMemory.html#cupy.cuda.ManagedMemory & https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.malloc_managed.html). To better understand what's going on, please read https://developer.nvidia.com/blog/unified-memory-cuda-beginners/
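For reference, here is a minimal sketch of routing CuPy allocations through managed (unified) memory, assuming a CUDA device and recent CuPy; this is runtime configuration, not cuSignal-specific API:

```python
import cupy as cp

# Route all CuPy allocations through CUDA managed (unified) memory.
# The driver can then oversubscribe GPU memory, paging data between
# host and device as needed instead of raising OutOfMemoryError.
cp.cuda.set_allocator(cp.cuda.MemoryPool(cp.cuda.malloc_managed).malloc)

# Any array created afterwards is backed by managed memory, so a
# buffer larger than physical GPU memory can still be allocated
# (at the cost of page-migration overhead when it is touched).
sig = cp.zeros(110_000_000, dtype=cp.complex64)
```

Note this trades hard failures for performance: pages that don't fit on the device are migrated on demand, so the spectrogram will run slower than a fully resident computation.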
I get a memory error if the number of samples is > 110M in spectrogram.
Steps/Code to reproduce bug
I get an error depending on the signal length:
OutOfMemoryError: Out of memory allocating 1,280,016,384 bytes (allocated so far: 5,068,731,392 bytes).
Expected behavior
I would expect the spectrogram function or _fft_helper to process the data in chunks if it does not fit in GPU memory, or to accept a keyword argument for chunking.
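The chunking idea above can be sketched on the host with NumPy: because spectrogram frames only overlap locally, the signal can be processed a slice at a time so that only one slice (plus window overlap) would ever need to be resident on the GPU. The function name and `frames_per_chunk` parameter are hypothetical, not part of cuSignal:

```python
import numpy as np

def chunked_spectrogram(x, nperseg=256, hop=128, frames_per_chunk=1024):
    """Power spectrogram computed chunk by chunk.

    Hypothetical sketch of the requested behavior: each chunk of
    frames is built and FFT'd independently, so only that slice of
    the signal would need to live in device memory at a time.
    """
    win = np.hanning(nperseg)
    n_frames = (len(x) - nperseg) // hop + 1
    cols = []
    for f0 in range(0, n_frames, frames_per_chunk):
        f1 = min(f0 + frames_per_chunk, n_frames)
        # Slice covering frames f0..f1-1, including window overlap.
        chunk = x[f0 * hop : (f1 - 1) * hop + nperseg]
        # Gather overlapping frames of the chunk as rows, then window.
        idx = np.arange(nperseg)[None, :] + hop * np.arange(f1 - f0)[:, None]
        frames = chunk[idx] * win
        cols.append(np.abs(np.fft.rfft(frames, axis=1)) ** 2)
    # Stack chunks along time, return (freq_bins, time_frames).
    return np.concatenate(cols, axis=0).T
```

Because each chunk is independent, the result is identical whether the whole signal is processed in one pass or in many small ones; a GPU version could additionally stream each chunk host-to-device before the FFT.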
Environment details:
Full traceback: