Issues with Storing and Accessing Large Data (>12-13 Bytes) via Public Gateway #164
Comments
Hey! Thank you for the report. I haven't done much testing with rust-ipfs and public gateways lately (it hasn't been a priority for me at the moment), but from the last tests I did, I know it often comes down to connectivity, whether the content is being provided on the DHT, and the bitswap implementation (your fork uses beetle-bitswap by default, which should work better when dealing with gateways). I didn't have time to do a full review of the code you're using (I can do that later today), but from a quick skim there are a few things I can suggest to see if they help:
I've updated the node configurations here using the
So I'm assuming the issue might be with how the node connects to the IPFS network, but that still doesn't explain why small datasets are accessible on the public gateways while anything past a certain size threshold becomes inaccessible.
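(On the DHT-providing point above: a rough sketch of explicitly announcing a stored CID after the block is written. The `default_bootstrap`, `bootstrap`, and `provide` methods and the `libipld` re-export are assumed from upstream rust-ipfs and may be named differently in the fork.)

```rust
use rust_ipfs::Ipfs;
use rust_ipfs::libipld::Cid; // re-export path assumed; adjust if the fork exposes `Cid` elsewhere

/// Make sure the node has joined the public DHT and has published a provider
/// record for `cid`, so that public gateways can actually locate the block.
async fn announce_to_dht(ipfs: &Ipfs, cid: Cid) -> anyhow::Result<()> {
    // Load the well-known bootstrap peers and run a DHT bootstrap first;
    // without DHT peers the provider record has nowhere to be stored.
    ipfs.default_bootstrap().await?;
    ipfs.bootstrap().await?;

    // Explicitly announce that this node can serve the CID.
    ipfs.provide(cid).await?;
    Ok(())
}
```

If retrieval starts working once the CID is provided explicitly, the problem is on the announcement/connectivity side rather than in chunking.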
Thank you for your response.
Could you add the other bootstrap nodes and maybe try calling
I do find it interesting that it only becomes a problem past a specific amount of data. Would it also be an issue if you were to run a local gateway and have your instance connect to that gateway instead? Are you connecting over any relays, and if so, does dcutr work properly? (You would likely have to look at the logs for this, and this assumes that UPnP isn't working or isn't an option in your environment; in that case it's best to check your firewall and network equipment to be sure.) If dcutr doesn't kick in, the behavior with small amounts of data might make sense, because relay v2 defaults to about 128k of data before the connection resets, since it expects dcutr to take over by then if both peers support the protocol and nothing prevents its use.
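(A sketch of the bootstrap/connectivity side discussed here. The multiaddrs are the well-known public bootstrap peers and are worth double-checking against kubo's current default list; `add_bootstrap`, `bootstrap`, and `connect` are assumed to exist on the `Ipfs` handle roughly as upstream, and the local-gateway multiaddr is a placeholder you would take from `ipfs id` on that node.)

```rust
use rust_ipfs::{Ipfs, Multiaddr};

/// Well-known public bootstrap peers (the same list kubo ships with).
const BOOTSTRAP: &[&str] = &[
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmNnooDu7bfjPFoTZYxMNLWUQJyrVwtbZg5gBMjTezGAJN",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmQCU2EcMqAqQPR2i9bChDtGNJchTbq5TbXJJ16u19uLTa",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmbLHAnMoJPWSCR5Zhtx6BHJX9KiKNN6tpvbUcqanj75Nb",
    "/dnsaddr/bootstrap.libp2p.io/p2p/QmcZf59bWwK5XFi76CZX8cbJ4BhTzzA3gU1ZjYZcYW3dwt",
    "/ip4/104.131.131.82/tcp/4001/p2p/QmaCpDMGvV2BGHeYERUEnRQAwe3N8SzbUtfsmvsqQLuvuJ",
];

async fn improve_connectivity(ipfs: &Ipfs) -> anyhow::Result<()> {
    // Register every public bootstrap peer, not just one, then kick off a DHT bootstrap.
    for addr in BOOTSTRAP {
        let addr: Multiaddr = addr.parse()?;
        ipfs.add_bootstrap(addr).await?; // method name assumed; older releases used `add_bootstrapper`
    }
    ipfs.bootstrap().await?;

    // Optional: dial a locally-run kubo gateway directly so retrieval does not
    // depend on DHT/relay connectivity at all. The multiaddr below is a
    // placeholder -- use the one reported by `ipfs id` on that node.
    let local_gateway: Multiaddr = "/ip4/127.0.0.1/tcp/4001/p2p/12D3KooW...".parse()?;
    ipfs.connect(local_gateway).await?;

    Ok(())
}
```

Dialing a local gateway directly also takes relays and dcutr out of the picture, which helps narrow down whether the ~128k relay reset mentioned above is what you are hitting.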
Did a little testing, and there are only some instances where I've noticed fewer responses reaching a gateway, but nothing tied to any specific amount of data.
Description
I'm integrating rust-ipfs into a Substrate blockchain to enable decentralized storage capabilities for our nodes. The integration involves using offchain workers to interact with an IPFS node, managed by rust-ipfs, for storing and retrieving data. While testing this setup, I've encountered an issue where I'm unable to access data larger than approximately 12 to 13 bytes through a public IPFS gateway. Smaller data sizes work as expected and are accessible without issues.
Steps to Reproduce
Expected Behavior
Data of any size, when stored on IPFS using rust-ipfs through our Substrate blockchain integration, should be retrievable via public IPFS gateways.
Actual Behavior
When attempting to access data larger than 12 to 13 bytes through a public gateway, the request fails with a 504 Gateway Timeout. Smaller data sizes are retrievable without any issues.
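(The gateway check is easy to script while experimenting with configuration changes; a minimal example using `reqwest`, with the gateway URL and CID as placeholders.)

```rust
// Assumed dependencies: tokio = { version = "1", features = ["full"] },
// reqwest = "0.11", anyhow = "1".
use std::time::Duration;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Placeholder CID -- use the one your node reports after storing the data.
    let cid = "bafy...";
    let url = format!("https://ipfs.io/ipfs/{cid}");

    let client = reqwest::Client::builder()
        .timeout(Duration::from_secs(120))
        .build()?;

    match client.get(&url).send().await {
        Ok(resp) => println!("{cid}: HTTP {}", resp.status()), // 200 vs. 504 shows up here
        Err(e) => println!("{cid}: request failed: {e}"),
    }
    Ok(())
}
```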
Additional Information
Rust-IPFS version: forked rust-ipfs
Substrate version: polkadot-v0.9.43
I suspect this might be related to how rust-ipfs handles data chunking or broadcasting of CID announcements to the IPFS network, particularly for larger data sizes. However, I am not entirely sure if the issue lies within the configuration of the rust-ipfs node, the data storage process, or the retrieval/query mechanism.
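(To test the CID-announcement part specifically, one option is to ask a public delegated-routing endpoint whether any provider records for the CID are visible from the outside. A minimal sketch using `reqwest` against the Routing V1 HTTP API; `delegated-ipfs.dev` is used here only as an example endpoint, and the CID is a placeholder.)

```rust
// Assumed dependencies: tokio = { version = "1", features = ["full"] },
// reqwest = "0.11", anyhow = "1".

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Placeholder CID -- use the one returned when the data was stored.
    let cid = "bafy...";

    // Any endpoint implementing GET /routing/v1/providers/{cid} should work here.
    let url = format!("https://delegated-ipfs.dev/routing/v1/providers/{cid}");

    let resp = reqwest::Client::new()
        .get(&url)
        .header("Accept", "application/json")
        .send()
        .await?;

    // An empty provider list (or a 404) suggests the announcement never made it
    // out, pointing at providing/connectivity rather than chunking.
    println!("HTTP {}", resp.status());
    println!("{}", resp.text().await?);
    Ok(())
}
```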
Request for Assistance
Could you provide insights or recommendations on how to address this issue? Specifically, I am looking for:
Thank you for your support and looking forward to your guidance on resolving this challenge.