torch.OutOfMemoryError: CUDA out of memory while performing PEFT curation with SDG on default configs #520
Comments
cc: @VibhuJawa, @sarahyurick & @ruchaa-apte in case you have suggestions.
Not able to handle large datasets.
Can somebody provide a way to handle large datasets for semantic deduplication?
I have encountered this issue as well. What is the volume of your data?
I have used semantic deduplication on up to 5 million samples, but it fails once the dataset goes beyond about 800k samples or the samples themselves are large.
Hi, thanks for raising this. Could you give some more details about the GPUs and environment? How many GPUs is this running on? How much memory do you have per GPU? Based on the thread it seems like semdedup runs into OOM errors beyond 800k samples. What is the embedding size in this dataset?
Steps/Code to Reproduce Bug
Please provide minimal steps or a code snippet to reproduce the bug.
Using a dataset of 749,000 samples.
Running fine-tuning on allenai/tulu-3-sft-olmo-2-mixture.
Using the latest NVIDIA drivers.
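For reference, a minimal sketch of how a semantic-dedup run like this is typically wired up in NeMo Curator. The class and field names (SemDedup, SemDedupConfig, cache_dir, n_clusters, DocumentDataset.read_json) are assumptions based on the module paths visible in the traceback and the project's example configs, not the exact script used here, so check them against your installed version:

# Hedged sketch of the semantic-dedup invocation; names are assumptions based on
# nemo_curator/modules/semantic_dedup.py from the traceback below.
from nemo_curator.modules.config import SemDedupConfig
from nemo_curator.modules.semantic_dedup import SemDedup
from nemo_curator.datasets import DocumentDataset

config = SemDedupConfig(
    cache_dir="./semdedup_cache",  # embeddings and clustering artifacts are written here
    n_clusters=1000,               # more clusters -> smaller per-cluster similarity matrices
)

dataset = DocumentDataset.read_json("./tulu3_subset/*.jsonl", backend="cudf")
duplicates = SemDedup(config)(dataset)  # verify the constructor signature in your version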
2025-02-05 07:27:31,421 - distributed.worker - ERROR - Compute Failed
Key: ('lambda-619f7ac64f13a38ca6c6546e6af3af28', 10)
State: executing
Task: <Task ('lambda-619f7ac64f13a38ca6c6546e6af3af28', 10) reify(...)>
Exception: "OutOfMemoryError('CUDA out of memory. Tried to allocate 59.96 GiB. GPU 0 has a total capacity of 79.10 GiB of which 17.46 GiB is free. Process 363562 has 61.61 GiB memory in use. Of the allocated memory 60.08 GiB is allocated by PyTorch, and 284.75 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)')"
Traceback:
  File "/usr/local/lib/python3.10/dist-packages/dask/bag/core.py", line 1875, in reify
    seq = list(seq)
  File "/usr/local/lib/python3.10/dist-packages/dask/bag/core.py", line 2063, in __next__
    return self.f(*vals)
  File "/usr/local/lib/python3.10/dist-packages/nemo_curator/modules/semantic_dedup.py", line 524, in <lambda>
    lambda cluster_id: get_semantic_matches_per_cluster(
  File "/usr/local/lib/python3.10/dist-packages/nemo_curator/utils/semdedup_utils.py", line 272, in get_semantic_matches_per_cluster
    M, M1 = _semdedup(cluster_reps, "cuda")
  File "/usr/local/lib/python3.10/dist-packages/nemo_curator/utils/semdedup_utils.py", line 193, in _semdedup
    triu_sim_mat = torch.triu(pair_w_sim_matrix, diagonal=1)
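The 59.96 GiB allocation is consistent with what the traceback shows _semdedup doing per cluster: it holds an N x N pairwise-similarity matrix for the N embeddings in a cluster, and torch.triu then needs a second matrix of the same size, which is the allocation that fails while the original ~60 GiB matrix is still resident. A single oversized cluster of roughly 127k items is therefore enough to exhaust an 80 GiB GPU on its own. A back-of-the-envelope sketch (plain PyTorch, assuming float32 similarities, not NeMo Curator code):

import torch

def pairwise_sim_matrix_gib(n_items: int, dtype=torch.float32) -> float:
    # Memory needed to materialize an n_items x n_items similarity matrix.
    bytes_per_element = torch.tensor([], dtype=dtype).element_size()
    return n_items * n_items * bytes_per_element / 2**30

print(pairwise_sim_matrix_gib(127_000))  # ~60 GiB, matching the failed allocation above
print(pairwise_sim_matrix_gib(30_000))   # ~3.4 GiB, which fits comfortably on an 80 GiB GPU

So the failure is driven by the size of the largest cluster rather than the total dataset size; spreading the data over more clusters keeps each matrix small.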
#####################
How can I launch this script on multiple GPUs to avoid CUDA out-of-memory errors?
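Not an answer from the maintainers, but NeMo Curator's GPU stages run on Dask, so one common way to use several GPUs is to start a Dask-CUDA cluster (one worker per GPU) before building the pipeline. The LocalCUDACluster arguments below are standard Dask-CUDA options; how your script attaches to the client depends on how it creates its Dask client, so treat this as a hedged sketch:

import os
from dask_cuda import LocalCUDACluster
from dask.distributed import Client

# Optional, taken from the error message's own suggestion; export it in the shell
# instead if your workers are spawned with a clean environment.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

# One Dask worker per listed GPU.
cluster = LocalCUDACluster(
    CUDA_VISIBLE_DEVICES="0,1,2,3",  # adjust to the GPUs available on your node
)
client = Client(cluster)

# ... build and run the dedup pipeline here; per-cluster tasks are then scheduled
# across all workers instead of piling up on GPU 0.

Note that more GPUs do not shrink the per-cluster similarity matrix: a single cluster that needs ~60 GiB will still OOM on whichever worker it lands on, so increasing the configured number of clusters (or capping the largest cluster size, if your version exposes such an option) is usually needed as well. Recent NeMo Curator versions also ship a get_client helper that wraps this cluster setup; check nemo_curator.utils.distributed_utils in your install.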