Out of Memory Sort Fails even with Spill over #57
@randerzander, FYI. |
@pentschev care to try the example above? |
Sorry, I totally missed this before. I checked the example and I can reproduce it. I believe we still have bugs in the memory spilling, which wasn't really tested before beyond my synthetic test case. This is now at the top of my priority list, since we now seem to have more people who need this and we would like to have it working properly for 0.8. |
What I found out is that memory really is the issue; in the case described here, the GPU has 16GB of memory. What's happening is that cuDF allocates memory proportional to the chunk size, so larger chunks push the GPU over its limit. On side channels @VibhuJawa pointed out that chunk sizes have a non-negligible impact on performance, so this is definitely something we want to improve in the future, but for the time being, using smaller chunk sizes is the only solution here. |
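As a rough illustration of that workaround, a minimal sketch (the file name and the '128 MiB' chunk size below are placeholder assumptions, not values taken from this report) would simply read the CSV with smaller partitions before sorting:

```python
import dask_cudf

# Smaller chunks leave more headroom on a 16GB GPU; '128 MiB' is an
# arbitrary example value chosen for illustration only.
df = dask_cudf.read_csv('test.csv', chunksize='128 MiB')
df = df.sort_values(['A', 'B', 'C'])
```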
Could you try configuring RMM to use managed memory and see how that works? The configuration has to be applied before RMM is initialized on each worker; for a managed memory pool you can do the same with the pool enabled as well.
|
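The exact snippets were lost from the comment above; a minimal sketch of what enabling managed memory through the old librmm_cffi configuration might have looked like follows (the `use_managed_memory` flag matches the worker snippet later in this thread, while the commented-out pool flags are assumptions about that era's config):

```python
import cudf
from librmm_cffi import librmm_config as rmm_cfg

cudf.rmm.finalize()                    # tear down the current RMM instance
rmm_cfg.use_managed_memory = True      # allocate through cudaMallocManaged
# rmm_cfg.use_pool_allocator = True    # assumed flag for a managed memory pool
# rmm_cfg.initial_pool_size = 2 << 30  # assumed: 2 GiB initial pool size
cudf.rmm.initialize()                  # re-initialize with the new settings
```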
I've finally got a chance to test this and I can confirm enabling RMM's managed memory works for me. I simply made sure each worker enables managed memory, and the adapted code from #57 (comment) looks like this:

from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
import cudf, dask_cudf

# Use dask-cuda to start one worker per GPU on a single-node system.
# When you shut down this notebook kernel, the Dask cluster also shuts down.
cluster = LocalCUDACluster(ip='0.0.0.0', n_workers=1, device_memory_limit='10000 MiB')
client = Client(cluster)

# print client info
print(client)

# code to simulate data
def generate_file(output_file, rows=100):
    with open(output_file, 'wb') as f:
        f.write(b'A,B,C,D,E,F,G,H,I,J,K\n')
        f.write(b'22,697,56,0.0,0.0,0.0,0.0,0.0,0.0,0,0\n23,697,56,0.0,0.0,0.0,0.0,0.0,0.0,0,0\n' * (rows // 2))

# generate the test file
output_file = 'test.csv'
generate_file(output_file, rows=100_000_000)

# enable RMM managed memory on every worker
def initialize_rmm():
    from librmm_cffi import librmm_config as rmm_cfg
    import cudf
    cudf.rmm.finalize()
    rmm_cfg.use_managed_memory = True
    return cudf.rmm.initialize()

client.run(initialize_rmm)

# read the file using dask_cudf
df = dask_cudf.read_csv(output_file, chunksize='100 MiB')
print(df.head(10).to_pandas())

# sort the dataframe
df = df.sort_values(['A', 'B', 'C'])

Similarly, the code in #65 (comment) works too. @VibhuJawa could you try it out as well? |
Note that with this PR from @shwina, rapidsai/cudf#2682, it's even easier to set the allocator mode; it should be as easy as a single call.
|
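The call itself was stripped from the comment above; presumably it was along the lines of `cudf.set_allocator`, introduced by that PR (treat the exact arguments here as an assumption):

```python
import cudf

cudf.set_allocator("managed")               # switch allocations to CUDA managed memory
# cudf.set_allocator("managed", pool=True)  # assumed variant for a managed memory pool
```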
One additional comment: I also did some profiling on this example. Since copying data to host in Dask requires allocating NumPy arrays in host memory, that allocation is one of the limiting factors (whereas with managed memory in C++ libraries, such as cuDF, it is essentially free, since we don't necessarily have to pay the price of a host allocation). However, I worked on a fix for NumPy that was released only in 1.17.1 (numpy/numpy#14216 -- not yet available through conda, only pip) and it reduces execution time from 4m20s-4m40s down to 3m10s-4m10s. @mrocklin these numbers may interest you as well. |
@jrhemstad, does RMM support CUDA managed memory? |
@jakirkham yes, the RMM documentation has details on that: https://github.com/rapidsai/rmm#cuda-managed-memory |
CC: @beckernick, see this thread for context. |
We just spent some time chatting with @beckernick about resolving this issue. It was suggested to try CUDA Managed Memory. This solution has the drawback of not being supported with UCX message passing, so the user would have to use TCP. UCX support for CUDA Managed Memory will probably land near the end of this year, though this can potentially change with different resource allocation. |
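For reference, whether workers communicate over TCP or UCX is chosen when the cluster is created; a hedged sketch follows (the UCX-related parameters are from newer dask-cuda releases and may postdate this discussion):

```python
from dask_cuda import LocalCUDACluster

# The default communication protocol is plain TCP, which is what the
# managed memory workaround currently requires:
cluster = LocalCUDACluster()

# UCX-based communication (NVLink capable) would instead be requested like
# this, but as noted above it does not yet work with managed memory:
# cluster = LocalCUDACluster(protocol="ucx", enable_nvlink=True)
```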
What else can we do in the short term? |
I don't think we have any real alternatives to using managed memory to solve this issue, but @jrhemstad may have ideas on how we could emulate managed memory on the RMM side without actual managed memory. If we don't have any real alternatives, as I believe to be the case, I think the best way to handle this is to focus on speeding up managed memory support in UCX. |
It might also be worth doing a mental exercise to see what speed-of-light would be here relative to doing this operation just on the CPU. If we're using a combination of device and host memory to perform an out-of-core sort, we're sort of assuming that the speedup we get from doing the in-memory sort operation on the GPU gives us a bigger boost than the cost of the device-to-host roundtrip transfer. This might not be the case.
|
At SOL (speed of light) we can sort 1TB on 16 GPUs in ~70 seconds right now. I believe MapR holds the record on 1004x (4-core) nodes at ~50 seconds. Given a DGX-2 has <60 cores, it seems like SOL to SOL favors the GPU. |
Does that include moving data between device and host? That was the main intent of my question. My guess is that when you introduce that bottleneck things slow down a lot, possibly to the point where just using the CPU is faster.
|
That is reading from disk (CSV) and writing back to disk. |
What if we copied managed memory over to non-managed memory before sending it? |
I would then argue that we would increase the memory footprint in such a case, and that's probably going to limit problem sizes even more. That said, my bias/preferred course of action would continue to be working toward managed memory support in UCX sooner. |
We could reuse the same buffer to copy into. Admittedly it will still increase the memory footprint, but by a fixed amount. It may also help with other issues (buffer registration). Agree this is a workaround. However if we are trying to do something sooner than fixing UCX, this could be one path to doing that. |
I see now that you're suggesting a fixed memory footprint, which didn't occur to me before. In that case, there's still a big issue: for larger buffers we would need to do multiple copies, which would incur various synchronization steps and decrease performance too. I'm very inclined to believe that the gain we would get from using managed memory in this case would be consumed by this behavior. I'm not opposed to someone trying that out if they have the bandwidth to do it, but I still think that time would be better spent working directly with the UCX folks to solve the issue at its core. |
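To make the trade-off concrete, here is a rough, hypothetical sketch of the fixed-size staging ("bounce") buffer idea together with the per-chunk synchronization mentioned above; `send_fn`, the buffer size, and all names are made-up placeholders, not anything from dask-cuda, RMM, or UCX:

```python
import numpy as np
from numba import cuda

CHUNK_BYTES = 32 << 20  # assumed 32 MiB staging buffer

def send_via_bounce_buffer(managed_ary, send_fn):
    """Stage `managed_ary` through a reusable non-managed device buffer,
    one chunk at a time, calling `send_fn` (hypothetical) on each chunk."""
    itemsize = managed_ary.dtype.itemsize
    chunk_items = max(1, CHUNK_BYTES // itemsize)
    staging = cuda.device_array(chunk_items, dtype=managed_ary.dtype)
    for start in range(0, managed_ary.size, chunk_items):
        stop = min(start + chunk_items, managed_ary.size)
        # device-to-device copy from managed memory into the staging buffer
        staging[: stop - start].copy_to_device(managed_ary[start:stop])
        cuda.synchronize()  # each chunk pays a synchronization, as noted above
        send_fn(staging[: stop - start])

# Example: a managed allocation that we pretend to "send"
src = cuda.managed_array(1_000_000, dtype=np.float32)
send_via_bounce_buffer(src, send_fn=lambda chunk: None)
```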
So I may be misunderstanding, but the motivation for using managed memory was to avoid a crash not improve performance. Is that correct? If so, I don't think we need to be concerned about degraded performance because it would still run (which is infinitely better 😉). In any event, it is very difficult to reason correctly about how much of a performance penalty one would take. Much easier to run it and benchmark it. |
It's actually twofold: there may be an OOM crash, but managed memory also brings a significant performance improvement with TCP, which I can't explain but is very useful. Apart from that, there's another issue I forgot to mention -- which is arguably more important -- and it is that we currently can't have any managed memory if we're using UCX, as all those allocations will be captured by UCX. IOW, the application must not contain any managed memory; if it does, the application crashes, as we have no way to prevent UCX from capturing that memory. |
@jakirkham @kkraus14 and I were discussing this offline, so to set expectations straight: “To share device memory pointers and events across processes, an application must use the Inter Process Communication API, which is described in detail in the reference manual. The IPC API is only supported for 64-bit processes on Linux and for devices of compute capability 2.0 and higher. Note that the IPC API is not supported for cudaMallocManaged allocations.” Reference: https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#interprocess-communication |
If that's the case, could we just only use TCP with managed memory? Or is there a downside here? |
I'm not sure I understand your question. If we use TCP, then there's no NVLink, which means transfers are limited by TCP bandwidth. |
Maybe I'm not understanding. Is this useful or should we ignore it? |
With TCP it's useful, but for the reasons we discussed above, we can't use managed memory with UCX, so in this last case it's not useful (even though my initial hopes were that it would work and we would see a performance improvement with UCX too). |
This is likely because managed memory is optimizing the device <--> host memory transfer, which can provide a 4x speedup over a naive non-pinned device <--> host memory transfer. |
@kkraus14 by "optimizing" you mean that it internally uses pinned memory or are you suggesting there are even further optimizations? |
Internally it uses pinned memory and then it tries to be smart about how it prefetches memory and overlaps the transfers with compute. |
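For anyone wanting to see the pinned-versus-pageable difference on their own machine, a small self-contained sketch follows (Numba is used here purely for illustration; the 4x figure above depends entirely on the system, and the 256 MB buffer size is an arbitrary choice):

```python
import time
import numpy as np
from numba import cuda

n = 64 * 1024 * 1024  # 64M float32 values, roughly 256 MB
dev = cuda.to_device(np.zeros(n, dtype=np.float32))

pageable = np.empty(n, dtype=np.float32)          # ordinary (pageable) host memory
pinned = cuda.pinned_array(n, dtype=np.float32)   # page-locked (pinned) host memory

for name, host in [("pageable", pageable), ("pinned", pinned)]:
    t0 = time.perf_counter()
    dev.copy_to_host(host)   # device-to-host copy into the given buffer
    cuda.synchronize()
    print(name, round(time.perf_counter() - t0, 4), "s")
```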
Yeah we might look at doing this (spilling to pinned memory) ourselves. There has been some discussion about adding some sort of pinned memory support with RMM. Though there is likely some work needed on the dask-cuda side to spill to pinned memory. Not entirely sure what this will look like yet. |
If we do look into spilling to pinned memory with RMM, issue ( rapidsai/rmm#260 ) is tracking the relevant work needed/done. |
With the TPCx-BB efforts being largely successful, I'd say this has been fixed or improved substantially, is that correct @VibhuJawa? Are we good with closing this or should we keep it open? |
@pentschev, yup, with all the TPCx-BB work this has indeed improved a lot. This is good to close in my book too. |
Out of memory sort still seems to be failing even with the device spill PR merged (#51).
The memory still seems to grow linearly, which causes:
RuntimeError: parallel_for failed: out of memory
Error trace: