ProvideMany: high memory usage when providing tens of millions of CIDs #354
Comments
CC @ischasny
2022-10-18 conversation: we aren't aware of this being a blocker at the moment, so we're not prioritizing it currently, but feedback is welcome if it needs to be moved up sooner.
@ajnavarro as per our discussion, it might be a good idea to chunk the CIDs snapshot into smaller pieces so that at least we don't squeeze all of them into a single HTTP request. That is problematic on larger nodes (like web3 storage): snapshots don't get reprovided even with high router timeouts. Maybe we can do that only for the reframe router initially? That should save us some memory on both the sending and receiving side. Wdyt?
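A minimal Go sketch of that chunking idea, purely as an illustration: the `splitKeys` helper, the multihash key type, and the default chunk size are assumptions here, not an existing API in go-delegated-routing.

```go
package chunking

import "github.com/multiformats/go-multihash"

// splitKeys breaks a full snapshot of keys into fixed-size chunks so each
// chunk can be sent as its own HTTP request instead of squeezing the whole
// snapshot into a single payload.
func splitKeys(keys []multihash.Multihash, chunkSize int) [][]multihash.Multihash {
	if chunkSize <= 0 {
		chunkSize = 10_000 // hypothetical default; would need tuning against real payload limits
	}
	chunks := make([][]multihash.Multihash, 0, (len(keys)+chunkSize-1)/chunkSize)
	for start := 0; start < len(keys); start += chunkSize {
		end := start + chunkSize
		if end > len(keys) {
			end = len(keys)
		}
		chunks = append(chunks, keys[start:end])
	}
	return chunks
}
```

The chunks share the original backing array, so this bounds the per-request payload size without copying the keys.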
Yeah, it won't be the final solution, but it will help with providing over HTTP.
Great! Would you guys be up for taking it into the next release? It should be simple to do and would unblock us too.
Related issue: ipfs/go-delegated-routing#55
When using BatchProviding, we are not really batching: we send all the CIDs at once to the Router implementing ProvideMany.
To avoid knock-on problems, we should actually batch the calls to ProvideMany.
This will help the Reframe Router implementation (https://github.com/ipfs/go-delegated-routing) avoid sending huge JSON payloads to the server.
We need to find good defaults so that the FullRT DHT implementation keeps its good performance numbers.
That is roughly a tenth of the memory spike observed; we are still searching for other possible problems.
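For reference, here is a minimal sketch of what batching the calls to ProvideMany could look like on the providing side. The `provideManyRouter` interface, the channel-driven `provideInBatches` loop, and the batch size are all assumptions for illustration; the real interface, key type, and defaults would need to be validated against the FullRT DHT's performance.

```go
package provider

import (
	"context"

	"github.com/multiformats/go-multihash"
)

// provideManyRouter is a stand-in for the router interface discussed above;
// the real interface and key type may differ.
type provideManyRouter interface {
	ProvideMany(ctx context.Context, keys []multihash.Multihash) error
}

// provideInBatches drains a stream of keys and flushes them to the router in
// fixed-size batches instead of accumulating the whole snapshot first.
func provideInBatches(ctx context.Context, r provideManyRouter, keys <-chan multihash.Multihash, batchSize int) error {
	batch := make([]multihash.Multihash, 0, batchSize)
	flush := func() error {
		if len(batch) == 0 {
			return nil
		}
		if err := r.ProvideMany(ctx, batch); err != nil {
			return err
		}
		// Start a fresh slice so the router may safely retain the previous batch.
		batch = make([]multihash.Multihash, 0, batchSize)
		return nil
	}
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case k, ok := <-keys:
			if !ok {
				return flush() // source exhausted: flush the remainder
			}
			batch = append(batch, k)
			if len(batch) >= batchSize {
				if err := flush(); err != nil {
					return err
				}
			}
		}
	}
}
```

With this shape, memory stays proportional to the batch size rather than to the total number of CIDs, at the cost of more round trips to the router.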